Test Report: KVM_Linux_crio 18711

d0c8b6a0bda25d1a1bd2a775bc56b8f16412b6e8:2024-04-22:34150

Test fail (14/221)

TestAddons/parallel/Ingress (155.69s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-649657 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-649657 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-649657 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4635a414-2076-41e6-b935-fd98104af18f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4635a414-2076-41e6-b935-fd98104af18f] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003973893s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-649657 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-649657 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.220617475s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-649657 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-649657 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.194
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-649657 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-649657 addons disable ingress-dns --alsologtostderr -v=1: (1.257456073s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-649657 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-649657 addons disable ingress --alsologtostderr -v=1: (7.896031958s)
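The step that fails is the ssh curl at addons_test.go:262: the request to http://127.0.0.1/ with the nginx.example.com Host header got no response inside the VM, and the ssh wrapper reported status 28, which is most likely curl's operation-timed-out exit code passed through minikube ssh, even though the ingress controller reported ready and the nginx pod reached Running. A minimal manual re-check along the same lines, assuming the profile name and hostname from this run, would be:

	kubectl --context addons-649657 get pods -n ingress-nginx -o wide
	out/minikube-linux-amd64 -p addons-649657 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"

The -v and --max-time flags are only added here to surface where the request stalls; the test itself runs the plain curl shown above.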
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-649657 -n addons-649657
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-649657 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-649657 logs -n 25: (1.425627131s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-205366 | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC |                     |
	|         | -p download-only-205366                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC | 22 Apr 24 10:38 UTC |
	| delete  | -p download-only-205366                                                                     | download-only-205366 | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC | 22 Apr 24 10:38 UTC |
	| delete  | -p download-only-692083                                                                     | download-only-692083 | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC | 22 Apr 24 10:38 UTC |
	| delete  | -p download-only-205366                                                                     | download-only-205366 | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC | 22 Apr 24 10:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-683094 | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC |                     |
	|         | binary-mirror-683094                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40437                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-683094                                                                     | binary-mirror-683094 | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC | 22 Apr 24 10:38 UTC |
	| addons  | enable dashboard -p                                                                         | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC |                     |
	|         | addons-649657                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC |                     |
	|         | addons-649657                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-649657 --wait=true                                                                | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC | 22 Apr 24 10:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-649657 addons disable                                                                | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-649657 ip                                                                            | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	| addons  | addons-649657 addons disable                                                                | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | addons-649657                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-649657 ssh cat                                                                       | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | /opt/local-path-provisioner/pvc-60f66f58-3d14-4dd8-976b-05bdb591f503_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-649657 addons disable                                                                | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | -p addons-649657                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | addons-649657                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | -p addons-649657                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-649657 addons                                                                        | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-649657 ssh curl -s                                                                   | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-649657 addons                                                                        | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-649657 ip                                                                            | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:44 UTC | 22 Apr 24 10:44 UTC |
	| addons  | addons-649657 addons disable                                                                | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:44 UTC | 22 Apr 24 10:44 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-649657 addons disable                                                                | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:44 UTC | 22 Apr 24 10:45 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 10:38:23
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 10:38:23.079810   15606 out.go:291] Setting OutFile to fd 1 ...
	I0422 10:38:23.080046   15606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 10:38:23.080054   15606 out.go:304] Setting ErrFile to fd 2...
	I0422 10:38:23.080059   15606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 10:38:23.080271   15606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 10:38:23.080916   15606 out.go:298] Setting JSON to false
	I0422 10:38:23.081723   15606 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1246,"bootTime":1713781057,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 10:38:23.081781   15606 start.go:139] virtualization: kvm guest
	I0422 10:38:23.083798   15606 out.go:177] * [addons-649657] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 10:38:23.085280   15606 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 10:38:23.085241   15606 notify.go:220] Checking for updates...
	I0422 10:38:23.086718   15606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 10:38:23.088063   15606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 10:38:23.089354   15606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 10:38:23.090612   15606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 10:38:23.091947   15606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 10:38:23.093438   15606 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 10:38:23.124707   15606 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 10:38:23.126058   15606 start.go:297] selected driver: kvm2
	I0422 10:38:23.126074   15606 start.go:901] validating driver "kvm2" against <nil>
	I0422 10:38:23.126089   15606 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 10:38:23.126747   15606 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 10:38:23.126830   15606 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18711-7633/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 10:38:23.140835   15606 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 10:38:23.140884   15606 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 10:38:23.141113   15606 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 10:38:23.141189   15606 cni.go:84] Creating CNI manager for ""
	I0422 10:38:23.141205   15606 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 10:38:23.141215   15606 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 10:38:23.141274   15606 start.go:340] cluster config:
	{Name:addons-649657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-649657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 10:38:23.141370   15606 iso.go:125] acquiring lock: {Name:mkb6ac9fd17ffabc92a94047094130aad6203a95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 10:38:23.144113   15606 out.go:177] * Starting "addons-649657" primary control-plane node in "addons-649657" cluster
	I0422 10:38:23.145265   15606 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 10:38:23.145301   15606 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 10:38:23.145313   15606 cache.go:56] Caching tarball of preloaded images
	I0422 10:38:23.145394   15606 preload.go:173] Found /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 10:38:23.145406   15606 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 10:38:23.145690   15606 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/config.json ...
	I0422 10:38:23.145715   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/config.json: {Name:mk9bfe842d09f1f35d378a2cdb4c6d5de6c57750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:23.145841   15606 start.go:360] acquireMachinesLock for addons-649657: {Name:mk5cb9b294e703b264c1f97ac968ffd01e93b576 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 10:38:23.145898   15606 start.go:364] duration metric: took 41.92µs to acquireMachinesLock for "addons-649657"
	I0422 10:38:23.145930   15606 start.go:93] Provisioning new machine with config: &{Name:addons-649657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterNa
me:addons-649657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 10:38:23.146001   15606 start.go:125] createHost starting for "" (driver="kvm2")
	I0422 10:38:23.147636   15606 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0422 10:38:23.147754   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:38:23.147795   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:38:23.161398   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36741
	I0422 10:38:23.161866   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:38:23.162402   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:38:23.162423   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:38:23.162793   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:38:23.163031   15606 main.go:141] libmachine: (addons-649657) Calling .GetMachineName
	I0422 10:38:23.163187   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:38:23.163371   15606 start.go:159] libmachine.API.Create for "addons-649657" (driver="kvm2")
	I0422 10:38:23.163408   15606 client.go:168] LocalClient.Create starting
	I0422 10:38:23.163449   15606 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem
	I0422 10:38:23.231391   15606 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem
	I0422 10:38:23.364680   15606 main.go:141] libmachine: Running pre-create checks...
	I0422 10:38:23.364704   15606 main.go:141] libmachine: (addons-649657) Calling .PreCreateCheck
	I0422 10:38:23.365240   15606 main.go:141] libmachine: (addons-649657) Calling .GetConfigRaw
	I0422 10:38:23.365667   15606 main.go:141] libmachine: Creating machine...
	I0422 10:38:23.365683   15606 main.go:141] libmachine: (addons-649657) Calling .Create
	I0422 10:38:23.365837   15606 main.go:141] libmachine: (addons-649657) Creating KVM machine...
	I0422 10:38:23.366932   15606 main.go:141] libmachine: (addons-649657) DBG | found existing default KVM network
	I0422 10:38:23.367818   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:23.367670   15628 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0422 10:38:23.367858   15606 main.go:141] libmachine: (addons-649657) DBG | created network xml: 
	I0422 10:38:23.367882   15606 main.go:141] libmachine: (addons-649657) DBG | <network>
	I0422 10:38:23.367896   15606 main.go:141] libmachine: (addons-649657) DBG |   <name>mk-addons-649657</name>
	I0422 10:38:23.367909   15606 main.go:141] libmachine: (addons-649657) DBG |   <dns enable='no'/>
	I0422 10:38:23.367918   15606 main.go:141] libmachine: (addons-649657) DBG |   
	I0422 10:38:23.367932   15606 main.go:141] libmachine: (addons-649657) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0422 10:38:23.367943   15606 main.go:141] libmachine: (addons-649657) DBG |     <dhcp>
	I0422 10:38:23.367953   15606 main.go:141] libmachine: (addons-649657) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0422 10:38:23.367964   15606 main.go:141] libmachine: (addons-649657) DBG |     </dhcp>
	I0422 10:38:23.367975   15606 main.go:141] libmachine: (addons-649657) DBG |   </ip>
	I0422 10:38:23.367985   15606 main.go:141] libmachine: (addons-649657) DBG |   
	I0422 10:38:23.367995   15606 main.go:141] libmachine: (addons-649657) DBG | </network>
	I0422 10:38:23.368009   15606 main.go:141] libmachine: (addons-649657) DBG | 
	I0422 10:38:23.373183   15606 main.go:141] libmachine: (addons-649657) DBG | trying to create private KVM network mk-addons-649657 192.168.39.0/24...
	I0422 10:38:23.437105   15606 main.go:141] libmachine: (addons-649657) DBG | private KVM network mk-addons-649657 192.168.39.0/24 created
	I0422 10:38:23.437182   15606 main.go:141] libmachine: (addons-649657) Setting up store path in /home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657 ...
	I0422 10:38:23.437213   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:23.437102   15628 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 10:38:23.437231   15606 main.go:141] libmachine: (addons-649657) Building disk image from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0422 10:38:23.437263   15606 main.go:141] libmachine: (addons-649657) Downloading /home/jenkins/minikube-integration/18711-7633/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0422 10:38:23.664867   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:23.664712   15628 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa...
	I0422 10:38:23.779577   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:23.779438   15628 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/addons-649657.rawdisk...
	I0422 10:38:23.779601   15606 main.go:141] libmachine: (addons-649657) DBG | Writing magic tar header
	I0422 10:38:23.779614   15606 main.go:141] libmachine: (addons-649657) DBG | Writing SSH key tar header
	I0422 10:38:23.779625   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:23.779554   15628 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657 ...
	I0422 10:38:23.779645   15606 main.go:141] libmachine: (addons-649657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657
	I0422 10:38:23.779670   15606 main.go:141] libmachine: (addons-649657) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657 (perms=drwx------)
	I0422 10:38:23.779680   15606 main.go:141] libmachine: (addons-649657) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines (perms=drwxr-xr-x)
	I0422 10:38:23.779712   15606 main.go:141] libmachine: (addons-649657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines
	I0422 10:38:23.779769   15606 main.go:141] libmachine: (addons-649657) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube (perms=drwxr-xr-x)
	I0422 10:38:23.779786   15606 main.go:141] libmachine: (addons-649657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 10:38:23.779795   15606 main.go:141] libmachine: (addons-649657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633
	I0422 10:38:23.779800   15606 main.go:141] libmachine: (addons-649657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 10:38:23.779808   15606 main.go:141] libmachine: (addons-649657) DBG | Checking permissions on dir: /home/jenkins
	I0422 10:38:23.779816   15606 main.go:141] libmachine: (addons-649657) DBG | Checking permissions on dir: /home
	I0422 10:38:23.779828   15606 main.go:141] libmachine: (addons-649657) DBG | Skipping /home - not owner
	I0422 10:38:23.779881   15606 main.go:141] libmachine: (addons-649657) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633 (perms=drwxrwxr-x)
	I0422 10:38:23.779905   15606 main.go:141] libmachine: (addons-649657) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 10:38:23.779915   15606 main.go:141] libmachine: (addons-649657) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 10:38:23.779926   15606 main.go:141] libmachine: (addons-649657) Creating domain...
	I0422 10:38:23.781015   15606 main.go:141] libmachine: (addons-649657) define libvirt domain using xml: 
	I0422 10:38:23.781039   15606 main.go:141] libmachine: (addons-649657) <domain type='kvm'>
	I0422 10:38:23.781050   15606 main.go:141] libmachine: (addons-649657)   <name>addons-649657</name>
	I0422 10:38:23.781058   15606 main.go:141] libmachine: (addons-649657)   <memory unit='MiB'>4000</memory>
	I0422 10:38:23.781068   15606 main.go:141] libmachine: (addons-649657)   <vcpu>2</vcpu>
	I0422 10:38:23.781087   15606 main.go:141] libmachine: (addons-649657)   <features>
	I0422 10:38:23.781120   15606 main.go:141] libmachine: (addons-649657)     <acpi/>
	I0422 10:38:23.781223   15606 main.go:141] libmachine: (addons-649657)     <apic/>
	I0422 10:38:23.781246   15606 main.go:141] libmachine: (addons-649657)     <pae/>
	I0422 10:38:23.781257   15606 main.go:141] libmachine: (addons-649657)     
	I0422 10:38:23.781264   15606 main.go:141] libmachine: (addons-649657)   </features>
	I0422 10:38:23.781273   15606 main.go:141] libmachine: (addons-649657)   <cpu mode='host-passthrough'>
	I0422 10:38:23.781286   15606 main.go:141] libmachine: (addons-649657)   
	I0422 10:38:23.781317   15606 main.go:141] libmachine: (addons-649657)   </cpu>
	I0422 10:38:23.781337   15606 main.go:141] libmachine: (addons-649657)   <os>
	I0422 10:38:23.781350   15606 main.go:141] libmachine: (addons-649657)     <type>hvm</type>
	I0422 10:38:23.781361   15606 main.go:141] libmachine: (addons-649657)     <boot dev='cdrom'/>
	I0422 10:38:23.781372   15606 main.go:141] libmachine: (addons-649657)     <boot dev='hd'/>
	I0422 10:38:23.781383   15606 main.go:141] libmachine: (addons-649657)     <bootmenu enable='no'/>
	I0422 10:38:23.781393   15606 main.go:141] libmachine: (addons-649657)   </os>
	I0422 10:38:23.781403   15606 main.go:141] libmachine: (addons-649657)   <devices>
	I0422 10:38:23.781415   15606 main.go:141] libmachine: (addons-649657)     <disk type='file' device='cdrom'>
	I0422 10:38:23.781438   15606 main.go:141] libmachine: (addons-649657)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/boot2docker.iso'/>
	I0422 10:38:23.781454   15606 main.go:141] libmachine: (addons-649657)       <target dev='hdc' bus='scsi'/>
	I0422 10:38:23.781465   15606 main.go:141] libmachine: (addons-649657)       <readonly/>
	I0422 10:38:23.781477   15606 main.go:141] libmachine: (addons-649657)     </disk>
	I0422 10:38:23.781489   15606 main.go:141] libmachine: (addons-649657)     <disk type='file' device='disk'>
	I0422 10:38:23.781510   15606 main.go:141] libmachine: (addons-649657)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 10:38:23.781531   15606 main.go:141] libmachine: (addons-649657)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/addons-649657.rawdisk'/>
	I0422 10:38:23.781549   15606 main.go:141] libmachine: (addons-649657)       <target dev='hda' bus='virtio'/>
	I0422 10:38:23.781556   15606 main.go:141] libmachine: (addons-649657)     </disk>
	I0422 10:38:23.781568   15606 main.go:141] libmachine: (addons-649657)     <interface type='network'>
	I0422 10:38:23.781580   15606 main.go:141] libmachine: (addons-649657)       <source network='mk-addons-649657'/>
	I0422 10:38:23.781592   15606 main.go:141] libmachine: (addons-649657)       <model type='virtio'/>
	I0422 10:38:23.781603   15606 main.go:141] libmachine: (addons-649657)     </interface>
	I0422 10:38:23.781614   15606 main.go:141] libmachine: (addons-649657)     <interface type='network'>
	I0422 10:38:23.781626   15606 main.go:141] libmachine: (addons-649657)       <source network='default'/>
	I0422 10:38:23.781638   15606 main.go:141] libmachine: (addons-649657)       <model type='virtio'/>
	I0422 10:38:23.781648   15606 main.go:141] libmachine: (addons-649657)     </interface>
	I0422 10:38:23.781660   15606 main.go:141] libmachine: (addons-649657)     <serial type='pty'>
	I0422 10:38:23.781670   15606 main.go:141] libmachine: (addons-649657)       <target port='0'/>
	I0422 10:38:23.781681   15606 main.go:141] libmachine: (addons-649657)     </serial>
	I0422 10:38:23.781692   15606 main.go:141] libmachine: (addons-649657)     <console type='pty'>
	I0422 10:38:23.781713   15606 main.go:141] libmachine: (addons-649657)       <target type='serial' port='0'/>
	I0422 10:38:23.781726   15606 main.go:141] libmachine: (addons-649657)     </console>
	I0422 10:38:23.781736   15606 main.go:141] libmachine: (addons-649657)     <rng model='virtio'>
	I0422 10:38:23.781748   15606 main.go:141] libmachine: (addons-649657)       <backend model='random'>/dev/random</backend>
	I0422 10:38:23.781759   15606 main.go:141] libmachine: (addons-649657)     </rng>
	I0422 10:38:23.781770   15606 main.go:141] libmachine: (addons-649657)     
	I0422 10:38:23.781782   15606 main.go:141] libmachine: (addons-649657)     
	I0422 10:38:23.781792   15606 main.go:141] libmachine: (addons-649657)   </devices>
	I0422 10:38:23.781807   15606 main.go:141] libmachine: (addons-649657) </domain>
	I0422 10:38:23.781816   15606 main.go:141] libmachine: (addons-649657) 
	I0422 10:38:23.787082   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:22:b9:16 in network default
	I0422 10:38:23.787647   15606 main.go:141] libmachine: (addons-649657) Ensuring networks are active...
	I0422 10:38:23.787663   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:23.788321   15606 main.go:141] libmachine: (addons-649657) Ensuring network default is active
	I0422 10:38:23.788689   15606 main.go:141] libmachine: (addons-649657) Ensuring network mk-addons-649657 is active
	I0422 10:38:23.789159   15606 main.go:141] libmachine: (addons-649657) Getting domain xml...
	I0422 10:38:23.789765   15606 main.go:141] libmachine: (addons-649657) Creating domain...
	I0422 10:38:25.195662   15606 main.go:141] libmachine: (addons-649657) Waiting to get IP...
	I0422 10:38:25.196442   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:25.196852   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:25.196888   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:25.196826   15628 retry.go:31] will retry after 232.878498ms: waiting for machine to come up
	I0422 10:38:25.431389   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:25.431780   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:25.431848   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:25.431770   15628 retry.go:31] will retry after 346.743722ms: waiting for machine to come up
	I0422 10:38:25.780676   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:25.781106   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:25.781130   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:25.781068   15628 retry.go:31] will retry after 437.70568ms: waiting for machine to come up
	I0422 10:38:26.220719   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:26.221177   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:26.221200   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:26.221148   15628 retry.go:31] will retry after 438.886905ms: waiting for machine to come up
	I0422 10:38:26.661712   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:26.662109   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:26.662144   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:26.662065   15628 retry.go:31] will retry after 503.335056ms: waiting for machine to come up
	I0422 10:38:27.166635   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:27.167072   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:27.167126   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:27.167012   15628 retry.go:31] will retry after 798.067912ms: waiting for machine to come up
	I0422 10:38:27.967000   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:27.967462   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:27.967494   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:27.967412   15628 retry.go:31] will retry after 775.145721ms: waiting for machine to come up
	I0422 10:38:28.744013   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:28.744366   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:28.744389   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:28.744336   15628 retry.go:31] will retry after 1.114755525s: waiting for machine to come up
	I0422 10:38:29.860547   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:29.860983   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:29.861013   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:29.860902   15628 retry.go:31] will retry after 1.404380425s: waiting for machine to come up
	I0422 10:38:31.267452   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:31.267888   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:31.267914   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:31.267841   15628 retry.go:31] will retry after 2.048742661s: waiting for machine to come up
	I0422 10:38:33.318039   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:33.318537   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:33.318566   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:33.318500   15628 retry.go:31] will retry after 2.397547405s: waiting for machine to come up
	I0422 10:38:35.718109   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:35.718472   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:35.718491   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:35.718443   15628 retry.go:31] will retry after 2.840628225s: waiting for machine to come up
	I0422 10:38:38.562290   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:38.562755   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:38.562784   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:38.562721   15628 retry.go:31] will retry after 3.644606309s: waiting for machine to come up
	I0422 10:38:42.208497   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:42.208800   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:42.208819   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:42.208758   15628 retry.go:31] will retry after 4.598552626s: waiting for machine to come up
	I0422 10:38:46.811357   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:46.811868   15606 main.go:141] libmachine: (addons-649657) Found IP for machine: 192.168.39.194
	I0422 10:38:46.811893   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has current primary IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:46.811900   15606 main.go:141] libmachine: (addons-649657) Reserving static IP address...
	I0422 10:38:46.812299   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find host DHCP lease matching {name: "addons-649657", mac: "52:54:00:fd:fb:c8", ip: "192.168.39.194"} in network mk-addons-649657
	I0422 10:38:46.880741   15606 main.go:141] libmachine: (addons-649657) Reserved static IP address: 192.168.39.194
	I0422 10:38:46.880816   15606 main.go:141] libmachine: (addons-649657) Waiting for SSH to be available...
	I0422 10:38:46.880833   15606 main.go:141] libmachine: (addons-649657) DBG | Getting to WaitForSSH function...
	I0422 10:38:46.883253   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:46.883720   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:46.883770   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:46.883946   15606 main.go:141] libmachine: (addons-649657) DBG | Using SSH client type: external
	I0422 10:38:46.883975   15606 main.go:141] libmachine: (addons-649657) DBG | Using SSH private key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa (-rw-------)
	I0422 10:38:46.884005   15606 main.go:141] libmachine: (addons-649657) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 10:38:46.884021   15606 main.go:141] libmachine: (addons-649657) DBG | About to run SSH command:
	I0422 10:38:46.884036   15606 main.go:141] libmachine: (addons-649657) DBG | exit 0
	I0422 10:38:47.013234   15606 main.go:141] libmachine: (addons-649657) DBG | SSH cmd err, output: <nil>: 
	I0422 10:38:47.013518   15606 main.go:141] libmachine: (addons-649657) KVM machine creation complete!
	I0422 10:38:47.013809   15606 main.go:141] libmachine: (addons-649657) Calling .GetConfigRaw
	I0422 10:38:47.014321   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:38:47.014484   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:38:47.014646   15606 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 10:38:47.014663   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:38:47.015922   15606 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 10:38:47.015936   15606 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 10:38:47.015942   15606 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 10:38:47.015948   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:47.018209   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.018570   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.018601   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.018707   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:47.018860   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.019032   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.019164   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:47.019335   15606 main.go:141] libmachine: Using SSH client type: native
	I0422 10:38:47.019544   15606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0422 10:38:47.019559   15606 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 10:38:47.120092   15606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 10:38:47.120120   15606 main.go:141] libmachine: Detecting the provisioner...
	I0422 10:38:47.120130   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:47.122651   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.122999   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.123027   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.123137   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:47.123289   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.123420   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.123565   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:47.123725   15606 main.go:141] libmachine: Using SSH client type: native
	I0422 10:38:47.123875   15606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0422 10:38:47.123885   15606 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 10:38:47.230101   15606 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 10:38:47.230179   15606 main.go:141] libmachine: found compatible host: buildroot
	I0422 10:38:47.230191   15606 main.go:141] libmachine: Provisioning with buildroot...
	I0422 10:38:47.230203   15606 main.go:141] libmachine: (addons-649657) Calling .GetMachineName
	I0422 10:38:47.230476   15606 buildroot.go:166] provisioning hostname "addons-649657"
	I0422 10:38:47.230499   15606 main.go:141] libmachine: (addons-649657) Calling .GetMachineName
	I0422 10:38:47.230682   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:47.233015   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.233345   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.233374   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.233493   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:47.233665   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.233796   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.233932   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:47.234098   15606 main.go:141] libmachine: Using SSH client type: native
	I0422 10:38:47.234265   15606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0422 10:38:47.234276   15606 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-649657 && echo "addons-649657" | sudo tee /etc/hostname
	I0422 10:38:47.354203   15606 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-649657
	
	I0422 10:38:47.354226   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:47.356552   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.356911   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.356941   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.357090   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:47.357263   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.357419   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.357546   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:47.357728   15606 main.go:141] libmachine: Using SSH client type: native
	I0422 10:38:47.357904   15606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0422 10:38:47.357927   15606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-649657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-649657/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-649657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 10:38:47.471583   15606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 10:38:47.471614   15606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18711-7633/.minikube CaCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18711-7633/.minikube}
	I0422 10:38:47.471643   15606 buildroot.go:174] setting up certificates
	I0422 10:38:47.471658   15606 provision.go:84] configureAuth start
	I0422 10:38:47.471669   15606 main.go:141] libmachine: (addons-649657) Calling .GetMachineName
	I0422 10:38:47.471961   15606 main.go:141] libmachine: (addons-649657) Calling .GetIP
	I0422 10:38:47.474574   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.474929   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.474954   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.475091   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:47.476911   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.477192   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.477215   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.477277   15606 provision.go:143] copyHostCerts
	I0422 10:38:47.477348   15606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem (1078 bytes)
	I0422 10:38:47.477491   15606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem (1123 bytes)
	I0422 10:38:47.477570   15606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem (1679 bytes)
	I0422 10:38:47.477634   15606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem org=jenkins.addons-649657 san=[127.0.0.1 192.168.39.194 addons-649657 localhost minikube]
	I0422 10:38:47.541200   15606 provision.go:177] copyRemoteCerts
	I0422 10:38:47.541260   15606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 10:38:47.541281   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:47.543814   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.544125   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.544150   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.544321   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:47.544499   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.544622   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:47.544751   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:38:47.628141   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 10:38:47.655092   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0422 10:38:47.681181   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 10:38:47.707974   15606 provision.go:87] duration metric: took 236.304055ms to configureAuth
	I0422 10:38:47.708004   15606 buildroot.go:189] setting minikube options for container-runtime
	I0422 10:38:47.708190   15606 config.go:182] Loaded profile config "addons-649657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 10:38:47.708283   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:47.710930   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.711266   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.711288   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.711500   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:47.711683   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.711840   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.711940   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:47.712074   15606 main.go:141] libmachine: Using SSH client type: native
	I0422 10:38:47.712260   15606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0422 10:38:47.712278   15606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 10:38:47.983213   15606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 10:38:47.983246   15606 main.go:141] libmachine: Checking connection to Docker...
	I0422 10:38:47.983257   15606 main.go:141] libmachine: (addons-649657) Calling .GetURL
	I0422 10:38:47.984383   15606 main.go:141] libmachine: (addons-649657) DBG | Using libvirt version 6000000
	I0422 10:38:47.986258   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.986571   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.986603   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.986755   15606 main.go:141] libmachine: Docker is up and running!
	I0422 10:38:47.986770   15606 main.go:141] libmachine: Reticulating splines...
	I0422 10:38:47.986782   15606 client.go:171] duration metric: took 24.82335883s to LocalClient.Create
	I0422 10:38:47.986808   15606 start.go:167] duration metric: took 24.823438049s to libmachine.API.Create "addons-649657"
	I0422 10:38:47.986823   15606 start.go:293] postStartSetup for "addons-649657" (driver="kvm2")
	I0422 10:38:47.986838   15606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 10:38:47.986863   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:38:47.987084   15606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 10:38:47.987106   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:47.988811   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.989089   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.989114   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.989275   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:47.989415   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.989574   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:47.989659   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:38:48.071918   15606 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 10:38:48.076737   15606 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 10:38:48.076761   15606 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/addons for local assets ...
	I0422 10:38:48.076856   15606 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/files for local assets ...
	I0422 10:38:48.076892   15606 start.go:296] duration metric: took 90.061633ms for postStartSetup
	I0422 10:38:48.076938   15606 main.go:141] libmachine: (addons-649657) Calling .GetConfigRaw
	I0422 10:38:48.077466   15606 main.go:141] libmachine: (addons-649657) Calling .GetIP
	I0422 10:38:48.079825   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.080163   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:48.080396   15606 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/config.json ...
	I0422 10:38:48.082002   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.082184   15606 start.go:128] duration metric: took 24.936172013s to createHost
	I0422 10:38:48.082207   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:48.084060   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.084380   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:48.084409   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.084478   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:48.084656   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:48.084812   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:48.084982   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:48.085127   15606 main.go:141] libmachine: Using SSH client type: native
	I0422 10:38:48.085307   15606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0422 10:38:48.085321   15606 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 10:38:48.185835   15606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713782328.147187196
	
	I0422 10:38:48.185864   15606 fix.go:216] guest clock: 1713782328.147187196
	I0422 10:38:48.185874   15606 fix.go:229] Guest: 2024-04-22 10:38:48.147187196 +0000 UTC Remote: 2024-04-22 10:38:48.082197786 +0000 UTC m=+25.046682825 (delta=64.98941ms)
	I0422 10:38:48.185913   15606 fix.go:200] guest clock delta is within tolerance: 64.98941ms
	I0422 10:38:48.185918   15606 start.go:83] releasing machines lock for "addons-649657", held for 25.040010037s
	I0422 10:38:48.185937   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:38:48.186152   15606 main.go:141] libmachine: (addons-649657) Calling .GetIP
	I0422 10:38:48.188797   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.189155   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:48.189185   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.189338   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:38:48.189784   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:38:48.189962   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:38:48.190085   15606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 10:38:48.190131   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:48.190184   15606 ssh_runner.go:195] Run: cat /version.json
	I0422 10:38:48.190210   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:48.193037   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.193127   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.193372   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:48.193399   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.193443   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:48.193473   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.193524   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:48.193656   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:48.193726   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:48.193802   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:48.193863   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:48.193905   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:48.193971   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:38:48.194039   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:38:48.294301   15606 ssh_runner.go:195] Run: systemctl --version
	I0422 10:38:48.300532   15606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 10:38:48.466397   15606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 10:38:48.477261   15606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 10:38:48.477332   15606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 10:38:48.495808   15606 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 10:38:48.495832   15606 start.go:494] detecting cgroup driver to use...
	I0422 10:38:48.495895   15606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 10:38:48.515026   15606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 10:38:48.530178   15606 docker.go:217] disabling cri-docker service (if available) ...
	I0422 10:38:48.530238   15606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 10:38:48.544539   15606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 10:38:48.559329   15606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 10:38:48.676790   15606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 10:38:48.821698   15606 docker.go:233] disabling docker service ...
	I0422 10:38:48.821769   15606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 10:38:48.836738   15606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 10:38:48.850871   15606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 10:38:48.988823   15606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 10:38:49.113820   15606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 10:38:49.129599   15606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 10:38:49.150667   15606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 10:38:49.150721   15606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 10:38:49.163900   15606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 10:38:49.163978   15606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 10:38:49.176687   15606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 10:38:49.189298   15606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 10:38:49.201573   15606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 10:38:49.214322   15606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 10:38:49.227157   15606 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 10:38:49.247102   15606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 10:38:49.260407   15606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 10:38:49.272003   15606 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 10:38:49.272068   15606 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 10:38:49.287435   15606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 10:38:49.298831   15606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 10:38:49.411046   15606 ssh_runner.go:195] Run: sudo systemctl restart crio
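For reference, the sed commands above all target /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted. A rough sketch of the fragment they should leave behind follows; the surrounding TOML section headers are not shown in this log, so they are omitted here as an assumption:

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

These values line up with the cgroupfs cgroup driver and pause image that kubeadm is pointed at later in this log.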
	I0422 10:38:49.561202   15606 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 10:38:49.561299   15606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 10:38:49.566882   15606 start.go:562] Will wait 60s for crictl version
	I0422 10:38:49.566951   15606 ssh_runner.go:195] Run: which crictl
	I0422 10:38:49.571336   15606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 10:38:49.607007   15606 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 10:38:49.607126   15606 ssh_runner.go:195] Run: crio --version
	I0422 10:38:49.643734   15606 ssh_runner.go:195] Run: crio --version
	I0422 10:38:49.674961   15606 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 10:38:49.676271   15606 main.go:141] libmachine: (addons-649657) Calling .GetIP
	I0422 10:38:49.678947   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:49.679310   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:49.679338   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:49.679516   15606 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 10:38:49.683969   15606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 10:38:49.698457   15606 kubeadm.go:877] updating cluster {Name:addons-649657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-649657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 10:38:49.698570   15606 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 10:38:49.698615   15606 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 10:38:49.735897   15606 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 10:38:49.735989   15606 ssh_runner.go:195] Run: which lz4
	I0422 10:38:49.740620   15606 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0422 10:38:49.745386   15606 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 10:38:49.745409   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 10:38:51.292185   15606 crio.go:462] duration metric: took 1.551598233s to copy over tarball
	I0422 10:38:51.292256   15606 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 10:38:53.918826   15606 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.626545528s)
	I0422 10:38:53.918856   15606 crio.go:469] duration metric: took 2.626642493s to extract the tarball
	I0422 10:38:53.918863   15606 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 10:38:53.957426   15606 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 10:38:54.000505   15606 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 10:38:54.000527   15606 cache_images.go:84] Images are preloaded, skipping loading
	I0422 10:38:54.000534   15606 kubeadm.go:928] updating node { 192.168.39.194 8443 v1.30.0 crio true true} ...
	I0422 10:38:54.000629   15606 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-649657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-649657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
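For reference, the kubelet unit override shown above is written out a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service. On a live node the merged result can be inspected with standard systemd tooling, for example:

	sudo systemctl cat kubelet
	sudo systemctl is-active kubelet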
	I0422 10:38:54.000692   15606 ssh_runner.go:195] Run: crio config
	I0422 10:38:54.050754   15606 cni.go:84] Creating CNI manager for ""
	I0422 10:38:54.050777   15606 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 10:38:54.050789   15606 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 10:38:54.050809   15606 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-649657 NodeName:addons-649657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 10:38:54.050957   15606 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-649657"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
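For reference, this generated kubeadm configuration is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later promoted to /var/tmp/minikube/kubeadm.yaml for kubeadm init. Outside of the test, a config like this can be sanity-checked without touching the host by using kubeadm's dry-run mode, roughly:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run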
	I0422 10:38:54.051033   15606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 10:38:54.062581   15606 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 10:38:54.062650   15606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 10:38:54.073389   15606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0422 10:38:54.091376   15606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 10:38:54.108950   15606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0422 10:38:54.126873   15606 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I0422 10:38:54.130843   15606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 10:38:54.144219   15606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 10:38:54.286714   15606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 10:38:54.305842   15606 certs.go:68] Setting up /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657 for IP: 192.168.39.194
	I0422 10:38:54.305865   15606 certs.go:194] generating shared ca certs ...
	I0422 10:38:54.305879   15606 certs.go:226] acquiring lock for ca certs: {Name:mk0b77082b88c771d0b00be5267ca31dfee6f85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:54.306016   15606 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key
	I0422 10:38:54.482881   15606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt ...
	I0422 10:38:54.482905   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt: {Name:mk573d0df2447a344243cd0320bc02744b0a0cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:54.483060   15606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key ...
	I0422 10:38:54.483079   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key: {Name:mkbba892ad24803d33bdd9f0663ff134beb893a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:54.483146   15606 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key
	I0422 10:38:54.663136   15606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt ...
	I0422 10:38:54.663164   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt: {Name:mkfc6c26312d3b3e9e186927f92c57740e56d2fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:54.663310   15606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key ...
	I0422 10:38:54.663320   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key: {Name:mkcccec01632708a58b44c2b15326f02db98e409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:54.663420   15606 certs.go:256] generating profile certs ...
	I0422 10:38:54.663476   15606 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.key
	I0422 10:38:54.663490   15606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt with IP's: []
	I0422 10:38:54.798329   15606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt ...
	I0422 10:38:54.798355   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: {Name:mkf225d80c3cb066317ff54ed4b5f84c6c5ea81f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:54.798496   15606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.key ...
	I0422 10:38:54.798507   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.key: {Name:mke8c68bb636e010b3bca0f2b152cfad1bee3b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:54.798572   15606 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.key.1a0dc645
	I0422 10:38:54.798588   15606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.crt.1a0dc645 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.194]
	I0422 10:38:55.008743   15606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.crt.1a0dc645 ...
	I0422 10:38:55.008787   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.crt.1a0dc645: {Name:mk4649aee81f9c78b4e81912b66088f7f2da2da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:55.008927   15606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.key.1a0dc645 ...
	I0422 10:38:55.008942   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.key.1a0dc645: {Name:mk1b3e2578f7d6e80ed5a43ab9a055fbdd305496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:55.009011   15606 certs.go:381] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.crt.1a0dc645 -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.crt
	I0422 10:38:55.009101   15606 certs.go:385] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.key.1a0dc645 -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.key
	I0422 10:38:55.009149   15606 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/proxy-client.key
	I0422 10:38:55.009166   15606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/proxy-client.crt with IP's: []
	I0422 10:38:55.675924   15606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/proxy-client.crt ...
	I0422 10:38:55.675951   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/proxy-client.crt: {Name:mk611730275760f07d3caabedff965afa7b5b867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:55.676102   15606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/proxy-client.key ...
	I0422 10:38:55.676113   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/proxy-client.key: {Name:mkcffa60ac09156fa9204336a51337aef6b00343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:55.676261   15606 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem (1679 bytes)
	I0422 10:38:55.676292   15606 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem (1078 bytes)
	I0422 10:38:55.676316   15606 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem (1123 bytes)
	I0422 10:38:55.676338   15606 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem (1679 bytes)
	I0422 10:38:55.676926   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 10:38:55.705729   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 10:38:55.733859   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 10:38:55.761212   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 10:38:55.788128   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0422 10:38:55.814678   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 10:38:55.842103   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 10:38:55.870019   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 10:38:55.900081   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 10:38:55.930494   15606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 10:38:55.948334   15606 ssh_runner.go:195] Run: openssl version
	I0422 10:38:55.954476   15606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 10:38:55.967462   15606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 10:38:55.972388   15606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 10:38:55.972453   15606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 10:38:55.978653   15606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 10:38:55.990570   15606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 10:38:55.995121   15606 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 10:38:55.995176   15606 kubeadm.go:391] StartCluster: {Name:addons-649657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-649657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 10:38:55.995253   15606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 10:38:55.995311   15606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 10:38:56.034153   15606 cri.go:89] found id: ""
	I0422 10:38:56.034211   15606 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0422 10:38:56.044948   15606 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 10:38:56.055359   15606 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 10:38:56.065419   15606 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 10:38:56.065443   15606 kubeadm.go:156] found existing configuration files:
	
	I0422 10:38:56.065483   15606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 10:38:56.074920   15606 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 10:38:56.074982   15606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 10:38:56.084874   15606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 10:38:56.094285   15606 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 10:38:56.094339   15606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 10:38:56.104189   15606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 10:38:56.113912   15606 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 10:38:56.113971   15606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 10:38:56.123901   15606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 10:38:56.133270   15606 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 10:38:56.133336   15606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 10:38:56.143204   15606 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 10:38:56.316983   15606 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 10:39:06.100464   15606 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 10:39:06.100527   15606 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 10:39:06.100620   15606 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 10:39:06.100736   15606 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 10:39:06.100862   15606 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 10:39:06.100973   15606 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 10:39:06.102612   15606 out.go:204]   - Generating certificates and keys ...
	I0422 10:39:06.102701   15606 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 10:39:06.102775   15606 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 10:39:06.102858   15606 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0422 10:39:06.102937   15606 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0422 10:39:06.103029   15606 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0422 10:39:06.103108   15606 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0422 10:39:06.103214   15606 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0422 10:39:06.103383   15606 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-649657 localhost] and IPs [192.168.39.194 127.0.0.1 ::1]
	I0422 10:39:06.103473   15606 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0422 10:39:06.103630   15606 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-649657 localhost] and IPs [192.168.39.194 127.0.0.1 ::1]
	I0422 10:39:06.103720   15606 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0422 10:39:06.103803   15606 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0422 10:39:06.103856   15606 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0422 10:39:06.103906   15606 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 10:39:06.103950   15606 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 10:39:06.104018   15606 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 10:39:06.104106   15606 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 10:39:06.104193   15606 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 10:39:06.104276   15606 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 10:39:06.104385   15606 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 10:39:06.104476   15606 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 10:39:06.105991   15606 out.go:204]   - Booting up control plane ...
	I0422 10:39:06.106087   15606 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 10:39:06.106168   15606 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 10:39:06.106246   15606 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 10:39:06.106370   15606 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 10:39:06.106477   15606 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 10:39:06.106534   15606 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 10:39:06.106671   15606 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 10:39:06.106747   15606 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 10:39:06.106801   15606 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001813271s
	I0422 10:39:06.106863   15606 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 10:39:06.106914   15606 kubeadm.go:309] [api-check] The API server is healthy after 5.002917478s
	I0422 10:39:06.107009   15606 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 10:39:06.107112   15606 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 10:39:06.107164   15606 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 10:39:06.107338   15606 kubeadm.go:309] [mark-control-plane] Marking the node addons-649657 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 10:39:06.107414   15606 kubeadm.go:309] [bootstrap-token] Using token: q8pyvi.q9qr6sp0xqf6hnwc
	I0422 10:39:06.109148   15606 out.go:204]   - Configuring RBAC rules ...
	I0422 10:39:06.109267   15606 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 10:39:06.109357   15606 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 10:39:06.109516   15606 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 10:39:06.109629   15606 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 10:39:06.109727   15606 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 10:39:06.109821   15606 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 10:39:06.109953   15606 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 10:39:06.110016   15606 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 10:39:06.110088   15606 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 10:39:06.110099   15606 kubeadm.go:309] 
	I0422 10:39:06.110182   15606 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 10:39:06.110193   15606 kubeadm.go:309] 
	I0422 10:39:06.110289   15606 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 10:39:06.110301   15606 kubeadm.go:309] 
	I0422 10:39:06.110351   15606 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 10:39:06.110412   15606 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 10:39:06.110458   15606 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 10:39:06.110464   15606 kubeadm.go:309] 
	I0422 10:39:06.110512   15606 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 10:39:06.110518   15606 kubeadm.go:309] 
	I0422 10:39:06.110561   15606 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 10:39:06.110567   15606 kubeadm.go:309] 
	I0422 10:39:06.110620   15606 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 10:39:06.110739   15606 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 10:39:06.110848   15606 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 10:39:06.110857   15606 kubeadm.go:309] 
	I0422 10:39:06.110963   15606 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 10:39:06.111036   15606 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 10:39:06.111042   15606 kubeadm.go:309] 
	I0422 10:39:06.111121   15606 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token q8pyvi.q9qr6sp0xqf6hnwc \
	I0422 10:39:06.111207   15606 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f \
	I0422 10:39:06.111227   15606 kubeadm.go:309] 	--control-plane 
	I0422 10:39:06.111233   15606 kubeadm.go:309] 
	I0422 10:39:06.111318   15606 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 10:39:06.111328   15606 kubeadm.go:309] 
	I0422 10:39:06.111410   15606 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token q8pyvi.q9qr6sp0xqf6hnwc \
	I0422 10:39:06.111542   15606 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f 
	I0422 10:39:06.111555   15606 cni.go:84] Creating CNI manager for ""
	I0422 10:39:06.111564   15606 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 10:39:06.113398   15606 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 10:39:06.114926   15606 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 10:39:06.128525   15606 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
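
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration minikube recommends for the kvm2 driver with the crio runtime. Its exact contents are not printed in the log; the sketch below writes a generic bridge conflist of that kind, where the field values are illustrative assumptions rather than the file minikube actually generated:

package main

import (
	"fmt"
	"os"
)

// A generic bridge CNI conflist: a "bridge" plugin with host-local IPAM plus a
// portmap plugin for hostPort support. Written to a temp dir here; on the node,
// minikube places its own version at /etc/cni/net.d/1-k8s.conflist.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	dir, err := os.MkdirTemp("", "cni")
	if err != nil {
		panic(err)
	}
	path := dir + "/1-k8s.conflist"
	if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote", path)
}
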
	I0422 10:39:06.151386   15606 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 10:39:06.151454   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:06.151521   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-649657 minikube.k8s.io/updated_at=2024_04_22T10_39_06_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437 minikube.k8s.io/name=addons-649657 minikube.k8s.io/primary=true
	I0422 10:39:06.258414   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:06.327385   15606 ops.go:34] apiserver oom_adj: -16
	I0422 10:39:06.758506   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:07.258612   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:07.759209   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:08.258801   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:08.759123   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:09.259055   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:09.758838   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:10.258815   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:10.759281   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:11.258560   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:11.759140   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:12.259255   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:12.759072   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:13.258551   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:13.758787   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:14.258504   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:14.758801   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:15.259133   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:15.758741   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:16.259368   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:16.758500   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:17.259193   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:17.758622   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:17.845699   15606 kubeadm.go:1107] duration metric: took 11.694314812s to wait for elevateKubeSystemPrivileges
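
The burst of identical "kubectl get sa default" runs at roughly 500ms intervals above is minikube waiting for the cluster's default service account to exist as part of the elevateKubeSystemPrivileges step (the kube-system cluster-admin binding created just above), timed at 11.7s here. A minimal Go sketch of that poll-until-ready pattern (hypothetical helper name, not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount retries "kubectl get sa default" every 500ms
// until it succeeds or the timeout expires, mirroring the loop in the log.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.30.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println("wait result:", err)
}
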
	W0422 10:39:17.845757   15606 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 10:39:17.845766   15606 kubeadm.go:393] duration metric: took 21.85059615s to StartCluster
	I0422 10:39:17.845786   15606 settings.go:142] acquiring lock: {Name:mkd680667f0df4166491741d55b55ac111bb0138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:39:17.845938   15606 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 10:39:17.846325   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/kubeconfig: {Name:mkee6de4c6906fe5621e8aeac858a93219648db5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:39:17.846539   15606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0422 10:39:17.846545   15606 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 10:39:17.848521   15606 out.go:177] * Verifying Kubernetes components...
	I0422 10:39:17.846594   15606 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0422 10:39:17.846754   15606 config.go:182] Loaded profile config "addons-649657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 10:39:17.849735   15606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 10:39:17.849757   15606 addons.go:69] Setting yakd=true in profile "addons-649657"
	I0422 10:39:17.849767   15606 addons.go:69] Setting cloud-spanner=true in profile "addons-649657"
	I0422 10:39:17.849789   15606 addons.go:234] Setting addon yakd=true in "addons-649657"
	I0422 10:39:17.849801   15606 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-649657"
	I0422 10:39:17.849806   15606 addons.go:234] Setting addon cloud-spanner=true in "addons-649657"
	I0422 10:39:17.849808   15606 addons.go:69] Setting ingress-dns=true in profile "addons-649657"
	I0422 10:39:17.849810   15606 addons.go:69] Setting ingress=true in profile "addons-649657"
	I0422 10:39:17.849825   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.849831   15606 addons.go:234] Setting addon ingress-dns=true in "addons-649657"
	I0422 10:39:17.849837   15606 addons.go:234] Setting addon ingress=true in "addons-649657"
	I0422 10:39:17.849843   15606 addons.go:69] Setting default-storageclass=true in profile "addons-649657"
	I0422 10:39:17.849850   15606 addons.go:69] Setting helm-tiller=true in profile "addons-649657"
	I0422 10:39:17.849862   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.849864   15606 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-649657"
	I0422 10:39:17.849867   15606 addons.go:234] Setting addon helm-tiller=true in "addons-649657"
	I0422 10:39:17.849872   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.849886   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.850199   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.849801   15606 addons.go:69] Setting metrics-server=true in profile "addons-649657"
	I0422 10:39:17.850224   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.850234   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.850238   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.850241   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.850245   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.850245   15606 addons.go:69] Setting inspektor-gadget=true in profile "addons-649657"
	I0422 10:39:17.849838   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.850263   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.850251   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.850276   15606 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-649657"
	I0422 10:39:17.850226   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.850339   15606 addons.go:69] Setting registry=true in profile "addons-649657"
	I0422 10:39:17.850342   15606 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-649657"
	I0422 10:39:17.850265   15606 addons.go:234] Setting addon inspektor-gadget=true in "addons-649657"
	I0422 10:39:17.849844   15606 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-649657"
	I0422 10:39:17.850359   15606 addons.go:69] Setting volumesnapshots=true in profile "addons-649657"
	I0422 10:39:17.850357   15606 addons.go:69] Setting storage-provisioner=true in profile "addons-649657"
	I0422 10:39:17.850368   15606 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-649657"
	I0422 10:39:17.850368   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.850375   15606 addons.go:234] Setting addon volumesnapshots=true in "addons-649657"
	I0422 10:39:17.849757   15606 addons.go:69] Setting gcp-auth=true in profile "addons-649657"
	I0422 10:39:17.850384   15606 addons.go:234] Setting addon storage-provisioner=true in "addons-649657"
	I0422 10:39:17.850238   15606 addons.go:234] Setting addon metrics-server=true in "addons-649657"
	I0422 10:39:17.850392   15606 mustload.go:65] Loading cluster: addons-649657
	I0422 10:39:17.850360   15606 addons.go:234] Setting addon registry=true in "addons-649657"
	I0422 10:39:17.850385   15606 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-649657"
	I0422 10:39:17.850562   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.850572   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.850588   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.850607   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.850633   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.850674   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.850565   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.851115   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851144   15606 config.go:182] Loaded profile config "addons-649657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 10:39:17.851166   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.851190   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.851213   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851239   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.851262   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851245   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851316   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.851218   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851350   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.851319   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.851401   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.851520   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851192   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851688   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.851570   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851815   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.851879   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851915   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.851666   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.867526   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40427
	I0422 10:39:17.870809   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43423
	I0422 10:39:17.877088   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40447
	I0422 10:39:17.877144   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44619
	I0422 10:39:17.877956   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.878077   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.878141   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.878205   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.879764   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.879783   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.879918   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.879930   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.880050   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.880063   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.880188   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.880202   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.881595   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.881637   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.881599   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.881710   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.882214   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.882268   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.882538   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.882556   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.883034   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.883053   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.882219   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.883250   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.906369   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34947
	I0422 10:39:17.906779   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.908847   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36243
	I0422 10:39:17.909274   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.909292   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.909733   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.910062   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.910236   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44113
	I0422 10:39:17.910761   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.910793   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.911009   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.911172   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.911183   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.911519   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.912001   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.912022   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.918022   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38705
	I0422 10:39:17.918123   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.918143   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.918202   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37223
	I0422 10:39:17.918551   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.919128   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.919150   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.919208   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.919397   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.919458   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.919889   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39239
	I0422 10:39:17.920071   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.920082   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.920645   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.920680   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0422 10:39:17.920710   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.921229   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.921392   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.921404   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.921768   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.921781   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.921832   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.921841   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.921861   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.922161   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.922311   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.922511   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.922529   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.922538   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39401
	I0422 10:39:17.923342   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.923409   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.923451   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39629
	I0422 10:39:17.925844   15606 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0422 10:39:17.927368   15606 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0422 10:39:17.927392   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0422 10:39:17.927413   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:17.925883   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.925823   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.927553   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.925456   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.927510   15606 addons.go:234] Setting addon default-storageclass=true in "addons-649657"
	I0422 10:39:17.927662   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.928000   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.928035   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.928852   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.928879   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.929018   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.929037   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.929451   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.929517   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.929732   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.931243   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.933109   15606 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0422 10:39:17.933224   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.933259   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.933894   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:17.934561   15606 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0422 10:39:17.934655   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.935894   15606 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0422 10:39:17.937376   15606 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0422 10:39:17.937395   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0422 10:39:17.937412   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:17.935920   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46093
	I0422 10:39:17.934760   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:17.934676   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:17.937595   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.938280   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:17.938492   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:17.939085   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.939625   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.939642   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.939991   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.940257   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.940506   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.941209   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:17.941225   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:17.941247   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.941390   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:17.941512   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:17.941623   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
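
Each "new ssh client" line above is an addon goroutine opening its own SSH session to the node (192.168.39.194:22, user docker, the per-machine id_rsa key) so it can copy addon manifests and run commands. A rough sketch of building such a client with golang.org/x/crypto/ssh (illustrative only; minikube's sshutil/ssh_runner wrap this differently):

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the node with key-based auth and runs a single command,
// returning its stdout; host key checking is skipped, as on a throwaway test VM.
func runOverSSH(addr, user, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	})
	if err != nil {
		return "", err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()

	var out bytes.Buffer
	session.Stdout = &out
	err = session.Run(command)
	return out.String(), err
}

func main() {
	out, err := runOverSSH("192.168.39.194:22", "docker",
		"/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa",
		"sudo ls /etc/kubernetes/addons")
	fmt.Println(out, err)
}
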
	I0422 10:39:17.942535   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0422 10:39:17.943427   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.944693   15606 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-649657"
	I0422 10:39:17.944737   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.945126   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.945159   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.945364   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43735
	I0422 10:39:17.945767   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.945784   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.946211   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.946799   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.946832   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.947060   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.947077   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33721
	I0422 10:39:17.947418   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.947492   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.947504   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.948290   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.948474   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.949545   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35923
	I0422 10:39:17.950032   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.950047   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.950124   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.950360   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.950755   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.950790   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.951068   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.951081   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.951479   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.952018   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.952061   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.952262   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.952446   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.954411   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.956873   15606 out.go:177]   - Using image docker.io/registry:2.8.3
	I0422 10:39:17.958348   15606 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0422 10:39:17.959549   15606 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0422 10:39:17.959571   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0422 10:39:17.959594   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:17.963397   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.963787   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:17.963811   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.964058   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0422 10:39:17.964207   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:17.964457   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:17.964539   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.964645   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:17.964916   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:17.965319   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.965339   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.966377   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.966544   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.968085   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.970030   15606 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0422 10:39:17.971484   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0422 10:39:17.971493   15606 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 10:39:17.971510   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 10:39:17.971530   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:17.971610   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45485
	I0422 10:39:17.972347   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.974863   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.975386   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:17.975409   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.975714   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.975729   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.975798   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:17.975845   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.976076   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.976125   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:17.976231   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.976277   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:17.976419   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:17.977187   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40319
	I0422 10:39:17.977993   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.978012   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.978257   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.978338   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.978554   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.980499   15606 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0422 10:39:17.978719   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.979538   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.980035   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42665
	I0422 10:39:17.981751   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43855
	I0422 10:39:17.981830   15606 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0422 10:39:17.981843   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0422 10:39:17.981861   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:17.981920   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.981927   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35651
	I0422 10:39:17.982143   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.982658   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.982729   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35751
	I0422 10:39:17.982831   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.983098   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.983560   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.983579   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.983638   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.983746   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.983757   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.983882   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.983892   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.984046   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.984064   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.984513   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.984516   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.984567   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.984545   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I0422 10:39:17.984985   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.985024   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.985766   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.985826   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.985867   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.985927   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.986080   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.986116   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.986729   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.986747   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.987116   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.987296   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.987790   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.987891   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.988088   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.989534   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:17.989594   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:17.989608   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.989634   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.989675   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.989712   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:17.989873   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:17.991637   15606 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0422 10:39:17.992801   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0422 10:39:17.989997   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:17.990074   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35593
	I0422 10:39:17.990587   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37483
	I0422 10:39:17.991021   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.992765   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44621
	I0422 10:39:17.995166   15606 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0422 10:39:17.995525   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.995920   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.996350   15606 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0422 10:39:17.996730   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.997805   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0422 10:39:17.998377   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.999145   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0422 10:39:17.999148   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0422 10:39:17.999569   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:18.000343   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:18.000361   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:18.000393   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.002095   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0422 10:39:18.000409   15606 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0422 10:39:17.999709   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:18.000705   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:18.000723   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:18.002130   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0422 10:39:18.002147   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:18.003950   15606 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0422 10:39:18.004146   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:18.005070   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.004170   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:18.004192   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.006531   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0422 10:39:18.005136   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.005173   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0422 10:39:18.004694   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.005889   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:18.006135   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I0422 10:39:18.006970   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:18.007182   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:18.009101   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0422 10:39:18.010356   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0422 10:39:18.008155   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.008163   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.008423   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.008446   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.008457   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:18.008721   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:18.008894   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.010420   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35887
	I0422 10:39:18.011677   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.012276   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.012903   15606 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0422 10:39:18.013276   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:18.014251   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.013286   15606 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0422 10:39:18.015681   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.017143   15606 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0422 10:39:18.017166   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0422 10:39:18.017182   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.013661   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:18.017203   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:18.014009   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:18.014294   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0422 10:39:18.013493   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.014435   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:18.015047   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.015795   15606 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0422 10:39:18.015839   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.017593   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:18.018533   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:18.018670   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.019753   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.019773   15606 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 10:39:18.019825   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0422 10:39:18.020076   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:18.020080   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.021120   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:18.020232   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.021143   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.022498   15606 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 10:39:18.022513   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 10:39:18.021111   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0422 10:39:18.023851   15606 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0422 10:39:18.023869   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0422 10:39:18.023884   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.022530   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.020859   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.021168   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.021347   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:18.021376   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:18.021523   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:18.022592   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:18.021163   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.024183   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.025795   15606 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0422 10:39:18.024688   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.025260   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:18.028635   15606 out.go:177]   - Using image docker.io/busybox:stable
	I0422 10:39:18.027603   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.027668   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.028688   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.028254   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.028328   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.028712   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.030138   15606 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0422 10:39:18.030154   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0422 10:39:18.030171   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.028941   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.030201   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.029221   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:18.029712   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.029842   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:18.029996   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.030257   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.030318   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.030334   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.030418   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.030599   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.030614   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.030679   15606 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 10:39:18.030691   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 10:39:18.030708   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.030742   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.030882   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.030899   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:18.030937   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:18.031283   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.031454   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	W0422 10:39:18.032687   15606 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37526->192.168.39.194:22: read: connection reset by peer
	I0422 10:39:18.032714   15606 retry.go:31] will retry after 324.459983ms: ssh: handshake failed: read tcp 192.168.39.1:37526->192.168.39.194:22: read: connection reset by peer
	I0422 10:39:18.033688   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.033714   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.034002   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.034020   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.034080   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.034099   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.034130   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.034246   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.034299   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.034342   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.034425   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.034470   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:18.034708   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.034859   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	W0422 10:39:18.035321   15606 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0422 10:39:18.035339   15606 retry.go:31] will retry after 285.480819ms: ssh: handshake failed: EOF
	I0422 10:39:18.268023   15606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 10:39:18.268038   15606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0422 10:39:18.296721   15606 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 10:39:18.296739   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0422 10:39:18.324868   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0422 10:39:18.359660   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0422 10:39:18.365196   15606 node_ready.go:35] waiting up to 6m0s for node "addons-649657" to be "Ready" ...
	I0422 10:39:18.368087   15606 node_ready.go:49] node "addons-649657" has status "Ready":"True"
	I0422 10:39:18.368105   15606 node_ready.go:38] duration metric: took 2.88419ms for node "addons-649657" to be "Ready" ...
	I0422 10:39:18.368113   15606 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 10:39:18.373521   15606 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-649657" in "kube-system" namespace to be "Ready" ...
	I0422 10:39:18.379624   15606 pod_ready.go:92] pod "etcd-addons-649657" in "kube-system" namespace has status "Ready":"True"
	I0422 10:39:18.379645   15606 pod_ready.go:81] duration metric: took 6.103757ms for pod "etcd-addons-649657" in "kube-system" namespace to be "Ready" ...
	I0422 10:39:18.379653   15606 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-649657" in "kube-system" namespace to be "Ready" ...
	I0422 10:39:18.384269   15606 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0422 10:39:18.384287   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0422 10:39:18.390003   15606 pod_ready.go:92] pod "kube-apiserver-addons-649657" in "kube-system" namespace has status "Ready":"True"
	I0422 10:39:18.390022   15606 pod_ready.go:81] duration metric: took 10.364034ms for pod "kube-apiserver-addons-649657" in "kube-system" namespace to be "Ready" ...
	I0422 10:39:18.390035   15606 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-649657" in "kube-system" namespace to be "Ready" ...
	I0422 10:39:18.401422   15606 pod_ready.go:92] pod "kube-controller-manager-addons-649657" in "kube-system" namespace has status "Ready":"True"
	I0422 10:39:18.401441   15606 pod_ready.go:81] duration metric: took 11.400738ms for pod "kube-controller-manager-addons-649657" in "kube-system" namespace to be "Ready" ...
	I0422 10:39:18.401450   15606 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-649657" in "kube-system" namespace to be "Ready" ...
	I0422 10:39:18.414766   15606 pod_ready.go:92] pod "kube-scheduler-addons-649657" in "kube-system" namespace has status "Ready":"True"
	I0422 10:39:18.414792   15606 pod_ready.go:81] duration metric: took 13.3369ms for pod "kube-scheduler-addons-649657" in "kube-system" namespace to be "Ready" ...
	I0422 10:39:18.414800   15606 pod_ready.go:38] duration metric: took 46.677237ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 10:39:18.414813   15606 api_server.go:52] waiting for apiserver process to appear ...
	I0422 10:39:18.414854   15606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 10:39:18.415929   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 10:39:18.418845   15606 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0422 10:39:18.418870   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0422 10:39:18.463448   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0422 10:39:18.507667   15606 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 10:39:18.507690   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 10:39:18.597821   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0422 10:39:18.600736   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0422 10:39:18.603482   15606 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0422 10:39:18.603501   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0422 10:39:18.675658   15606 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0422 10:39:18.675687   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0422 10:39:18.678536   15606 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0422 10:39:18.678560   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0422 10:39:18.740553   15606 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0422 10:39:18.740582   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0422 10:39:18.821647   15606 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 10:39:18.821676   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 10:39:18.856356   15606 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0422 10:39:18.856382   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0422 10:39:18.866663   15606 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0422 10:39:18.866691   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0422 10:39:18.934541   15606 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0422 10:39:18.934565   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0422 10:39:18.964434   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0422 10:39:19.040346   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 10:39:19.082294   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0422 10:39:19.188453   15606 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0422 10:39:19.188480   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0422 10:39:19.217582   15606 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0422 10:39:19.217613   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0422 10:39:19.225114   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 10:39:19.231792   15606 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0422 10:39:19.231819   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0422 10:39:19.253914   15606 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0422 10:39:19.253937   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0422 10:39:19.587300   15606 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0422 10:39:19.587326   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0422 10:39:19.639707   15606 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0422 10:39:19.639730   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0422 10:39:19.644485   15606 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0422 10:39:19.644505   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0422 10:39:19.719319   15606 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0422 10:39:19.719343   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0422 10:39:19.884581   15606 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0422 10:39:19.884607   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0422 10:39:19.993721   15606 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0422 10:39:19.993755   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0422 10:39:20.065847   15606 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0422 10:39:20.065876   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0422 10:39:20.313543   15606 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0422 10:39:20.313564   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0422 10:39:20.315885   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0422 10:39:20.319011   15606 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0422 10:39:20.319033   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0422 10:39:20.400315   15606 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0422 10:39:20.400346   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0422 10:39:20.732222   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0422 10:39:20.733654   15606 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0422 10:39:20.733671   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0422 10:39:20.758956   15606 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0422 10:39:20.758981   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0422 10:39:20.898686   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.57378809s)
	I0422 10:39:20.898740   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:20.898748   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:20.898685   15606 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.630617884s)
	I0422 10:39:20.898808   15606 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
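(Editorial note, not part of the captured log: the "host record injected" line above is the result of the sed/kubectl pipeline started earlier in this log, which rewrites the coredns ConfigMap in kube-system. An illustrative way to verify the injection by hand, using only names that appear in this report, would be:)
	# view the patched Corefile; per the sed expression in the command above, it should now contain:
	#   hosts {
	#      192.168.39.1 host.minikube.internal
	#      fallthrough
	#   }
	kubectl --context addons-649657 -n kube-system get configmap coredns -o yaml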
	I0422 10:39:20.899033   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:20.899096   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:20.899106   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:20.899121   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:20.899130   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:20.899354   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:20.899394   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:20.899418   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:21.189502   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0422 10:39:21.221478   15606 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0422 10:39:21.221506   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0422 10:39:21.402736   15606 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-649657" context rescaled to 1 replicas
	I0422 10:39:21.611303   15606 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0422 10:39:21.611328   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0422 10:39:21.851268   15606 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0422 10:39:21.851299   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0422 10:39:21.888075   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.528376597s)
	I0422 10:39:21.888131   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:21.888143   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:21.888168   15606 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.473295021s)
	I0422 10:39:21.888201   15606 api_server.go:72] duration metric: took 4.0416321s to wait for apiserver process to appear ...
	I0422 10:39:21.888212   15606 api_server.go:88] waiting for apiserver healthz status ...
	I0422 10:39:21.888232   15606 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0422 10:39:21.888431   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:21.888450   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:21.888462   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:21.888478   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:21.888486   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:21.888700   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:21.888726   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:21.916570   15606 api_server.go:279] https://192.168.39.194:8443/healthz returned 200:
	ok
	I0422 10:39:21.920624   15606 api_server.go:141] control plane version: v1.30.0
	I0422 10:39:21.920647   15606 api_server.go:131] duration metric: took 32.4294ms to wait for apiserver health ...
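(Editorial note, not part of the captured log: the healthz probe above polls https://192.168.39.194:8443/healthz directly. A hedged, equivalent manual check that reuses the cluster credentials instead of raw TLS would be:)
	kubectl --context addons-649657 get --raw /healthz
	# expected output, matching the log above:
	# ok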
	I0422 10:39:21.920655   15606 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 10:39:21.933513   15606 system_pods.go:59] 8 kube-system pods found
	I0422 10:39:21.933550   15606 system_pods.go:61] "coredns-7db6d8ff4d-2mxqp" [aa2ffe62-c568-4ca9-b23a-2976185dc0c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:21.933560   15606 system_pods.go:61] "coredns-7db6d8ff4d-tlwhf" [8980ac23-fb3e-457f-b6bb-b238465edfbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:21.933567   15606 system_pods.go:61] "etcd-addons-649657" [32390e86-daf9-4070-978d-3fe7fe2f42ca] Running
	I0422 10:39:21.933574   15606 system_pods.go:61] "kube-apiserver-addons-649657" [a13ce6af-f99b-4f74-beb5-e99cb393909e] Running
	I0422 10:39:21.933579   15606 system_pods.go:61] "kube-controller-manager-addons-649657" [add2d793-686a-4b81-8b08-fc8d4dd539bb] Running
	I0422 10:39:21.933592   15606 system_pods.go:61] "kube-proxy-hlgg9" [478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e] Running
	I0422 10:39:21.933597   15606 system_pods.go:61] "kube-scheduler-addons-649657" [d9bdd843-f8d4-45a9-977d-bed508686f8f] Running
	I0422 10:39:21.933606   15606 system_pods.go:61] "nvidia-device-plugin-daemonset-w4vxc" [3bfb0bd5-3242-4f72-9f7c-0c79543badd2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0422 10:39:21.933619   15606 system_pods.go:74] duration metric: took 12.958213ms to wait for pod list to return data ...
	I0422 10:39:21.933632   15606 default_sa.go:34] waiting for default service account to be created ...
	I0422 10:39:21.948729   15606 default_sa.go:45] found service account: "default"
	I0422 10:39:21.948758   15606 default_sa.go:55] duration metric: took 15.115185ms for default service account to be created ...
	I0422 10:39:21.948769   15606 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 10:39:21.954167   15606 system_pods.go:86] 8 kube-system pods found
	I0422 10:39:21.954195   15606 system_pods.go:89] "coredns-7db6d8ff4d-2mxqp" [aa2ffe62-c568-4ca9-b23a-2976185dc0c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:21.954204   15606 system_pods.go:89] "coredns-7db6d8ff4d-tlwhf" [8980ac23-fb3e-457f-b6bb-b238465edfbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:21.954210   15606 system_pods.go:89] "etcd-addons-649657" [32390e86-daf9-4070-978d-3fe7fe2f42ca] Running
	I0422 10:39:21.954214   15606 system_pods.go:89] "kube-apiserver-addons-649657" [a13ce6af-f99b-4f74-beb5-e99cb393909e] Running
	I0422 10:39:21.954218   15606 system_pods.go:89] "kube-controller-manager-addons-649657" [add2d793-686a-4b81-8b08-fc8d4dd539bb] Running
	I0422 10:39:21.954222   15606 system_pods.go:89] "kube-proxy-hlgg9" [478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e] Running
	I0422 10:39:21.954226   15606 system_pods.go:89] "kube-scheduler-addons-649657" [d9bdd843-f8d4-45a9-977d-bed508686f8f] Running
	I0422 10:39:21.954231   15606 system_pods.go:89] "nvidia-device-plugin-daemonset-w4vxc" [3bfb0bd5-3242-4f72-9f7c-0c79543badd2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0422 10:39:21.954242   15606 retry.go:31] will retry after 201.780034ms: missing components: kube-dns
	I0422 10:39:22.176121   15606 system_pods.go:86] 9 kube-system pods found
	I0422 10:39:22.176162   15606 system_pods.go:89] "coredns-7db6d8ff4d-2mxqp" [aa2ffe62-c568-4ca9-b23a-2976185dc0c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:22.176174   15606 system_pods.go:89] "coredns-7db6d8ff4d-tlwhf" [8980ac23-fb3e-457f-b6bb-b238465edfbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:22.176182   15606 system_pods.go:89] "etcd-addons-649657" [32390e86-daf9-4070-978d-3fe7fe2f42ca] Running
	I0422 10:39:22.176192   15606 system_pods.go:89] "kube-apiserver-addons-649657" [a13ce6af-f99b-4f74-beb5-e99cb393909e] Running
	I0422 10:39:22.176198   15606 system_pods.go:89] "kube-controller-manager-addons-649657" [add2d793-686a-4b81-8b08-fc8d4dd539bb] Running
	I0422 10:39:22.176205   15606 system_pods.go:89] "kube-ingress-dns-minikube" [a8f74405-5f73-4306-a4ca-244216a00b42] Pending
	I0422 10:39:22.176210   15606 system_pods.go:89] "kube-proxy-hlgg9" [478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e] Running
	I0422 10:39:22.176219   15606 system_pods.go:89] "kube-scheduler-addons-649657" [d9bdd843-f8d4-45a9-977d-bed508686f8f] Running
	I0422 10:39:22.176227   15606 system_pods.go:89] "nvidia-device-plugin-daemonset-w4vxc" [3bfb0bd5-3242-4f72-9f7c-0c79543badd2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0422 10:39:22.176250   15606 retry.go:31] will retry after 242.480405ms: missing components: kube-dns
	I0422 10:39:22.486107   15606 system_pods.go:86] 9 kube-system pods found
	I0422 10:39:22.486145   15606 system_pods.go:89] "coredns-7db6d8ff4d-2mxqp" [aa2ffe62-c568-4ca9-b23a-2976185dc0c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:22.486156   15606 system_pods.go:89] "coredns-7db6d8ff4d-tlwhf" [8980ac23-fb3e-457f-b6bb-b238465edfbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:22.486164   15606 system_pods.go:89] "etcd-addons-649657" [32390e86-daf9-4070-978d-3fe7fe2f42ca] Running
	I0422 10:39:22.486173   15606 system_pods.go:89] "kube-apiserver-addons-649657" [a13ce6af-f99b-4f74-beb5-e99cb393909e] Running
	I0422 10:39:22.486180   15606 system_pods.go:89] "kube-controller-manager-addons-649657" [add2d793-686a-4b81-8b08-fc8d4dd539bb] Running
	I0422 10:39:22.486190   15606 system_pods.go:89] "kube-ingress-dns-minikube" [a8f74405-5f73-4306-a4ca-244216a00b42] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0422 10:39:22.486196   15606 system_pods.go:89] "kube-proxy-hlgg9" [478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e] Running
	I0422 10:39:22.486203   15606 system_pods.go:89] "kube-scheduler-addons-649657" [d9bdd843-f8d4-45a9-977d-bed508686f8f] Running
	I0422 10:39:22.486217   15606 system_pods.go:89] "nvidia-device-plugin-daemonset-w4vxc" [3bfb0bd5-3242-4f72-9f7c-0c79543badd2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0422 10:39:22.486234   15606 retry.go:31] will retry after 479.404499ms: missing components: kube-dns
	I0422 10:39:22.499410   15606 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0422 10:39:22.499436   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0422 10:39:22.533062   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.117095596s)
	I0422 10:39:22.533125   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:22.533137   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:22.533454   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:22.533502   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:22.533522   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:22.533538   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:22.533549   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:22.533813   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:22.533868   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:22.533880   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:22.654717   15606 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0422 10:39:22.654743   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0422 10:39:22.982050   15606 system_pods.go:86] 10 kube-system pods found
	I0422 10:39:22.982083   15606 system_pods.go:89] "coredns-7db6d8ff4d-2mxqp" [aa2ffe62-c568-4ca9-b23a-2976185dc0c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:22.982110   15606 system_pods.go:89] "coredns-7db6d8ff4d-tlwhf" [8980ac23-fb3e-457f-b6bb-b238465edfbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:22.982121   15606 system_pods.go:89] "etcd-addons-649657" [32390e86-daf9-4070-978d-3fe7fe2f42ca] Running
	I0422 10:39:22.982129   15606 system_pods.go:89] "kube-apiserver-addons-649657" [a13ce6af-f99b-4f74-beb5-e99cb393909e] Running
	I0422 10:39:22.982138   15606 system_pods.go:89] "kube-controller-manager-addons-649657" [add2d793-686a-4b81-8b08-fc8d4dd539bb] Running
	I0422 10:39:22.982151   15606 system_pods.go:89] "kube-ingress-dns-minikube" [a8f74405-5f73-4306-a4ca-244216a00b42] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0422 10:39:22.982167   15606 system_pods.go:89] "kube-proxy-hlgg9" [478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e] Running
	I0422 10:39:22.982175   15606 system_pods.go:89] "kube-scheduler-addons-649657" [d9bdd843-f8d4-45a9-977d-bed508686f8f] Running
	I0422 10:39:22.982185   15606 system_pods.go:89] "nvidia-device-plugin-daemonset-w4vxc" [3bfb0bd5-3242-4f72-9f7c-0c79543badd2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0422 10:39:22.982199   15606 system_pods.go:89] "storage-provisioner" [3f7923bd-3f6b-44d8-846c-ed7eee65a6df] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0422 10:39:22.982221   15606 retry.go:31] will retry after 560.513153ms: missing components: kube-dns
	I0422 10:39:23.029058   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0422 10:39:23.706759   15606 system_pods.go:86] 11 kube-system pods found
	I0422 10:39:23.706791   15606 system_pods.go:89] "coredns-7db6d8ff4d-2mxqp" [aa2ffe62-c568-4ca9-b23a-2976185dc0c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:23.706798   15606 system_pods.go:89] "coredns-7db6d8ff4d-tlwhf" [8980ac23-fb3e-457f-b6bb-b238465edfbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:23.706804   15606 system_pods.go:89] "etcd-addons-649657" [32390e86-daf9-4070-978d-3fe7fe2f42ca] Running
	I0422 10:39:23.706809   15606 system_pods.go:89] "kube-apiserver-addons-649657" [a13ce6af-f99b-4f74-beb5-e99cb393909e] Running
	I0422 10:39:23.706813   15606 system_pods.go:89] "kube-controller-manager-addons-649657" [add2d793-686a-4b81-8b08-fc8d4dd539bb] Running
	I0422 10:39:23.706819   15606 system_pods.go:89] "kube-ingress-dns-minikube" [a8f74405-5f73-4306-a4ca-244216a00b42] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0422 10:39:23.706824   15606 system_pods.go:89] "kube-proxy-hlgg9" [478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e] Running
	I0422 10:39:23.706828   15606 system_pods.go:89] "kube-scheduler-addons-649657" [d9bdd843-f8d4-45a9-977d-bed508686f8f] Running
	I0422 10:39:23.706834   15606 system_pods.go:89] "nvidia-device-plugin-daemonset-w4vxc" [3bfb0bd5-3242-4f72-9f7c-0c79543badd2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0422 10:39:23.706838   15606 system_pods.go:89] "registry-nqc7x" [b64590e0-a02f-45d2-8f1e-198288db17c6] Pending
	I0422 10:39:23.706843   15606 system_pods.go:89] "storage-provisioner" [3f7923bd-3f6b-44d8-846c-ed7eee65a6df] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0422 10:39:23.706855   15606 retry.go:31] will retry after 487.757207ms: missing components: kube-dns
	I0422 10:39:24.299496   15606 system_pods.go:86] 14 kube-system pods found
	I0422 10:39:24.299537   15606 system_pods.go:89] "coredns-7db6d8ff4d-2mxqp" [aa2ffe62-c568-4ca9-b23a-2976185dc0c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:24.299548   15606 system_pods.go:89] "coredns-7db6d8ff4d-tlwhf" [8980ac23-fb3e-457f-b6bb-b238465edfbd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:24.299562   15606 system_pods.go:89] "etcd-addons-649657" [32390e86-daf9-4070-978d-3fe7fe2f42ca] Running
	I0422 10:39:24.299568   15606 system_pods.go:89] "kube-apiserver-addons-649657" [a13ce6af-f99b-4f74-beb5-e99cb393909e] Running
	I0422 10:39:24.299573   15606 system_pods.go:89] "kube-controller-manager-addons-649657" [add2d793-686a-4b81-8b08-fc8d4dd539bb] Running
	I0422 10:39:24.299580   15606 system_pods.go:89] "kube-ingress-dns-minikube" [a8f74405-5f73-4306-a4ca-244216a00b42] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0422 10:39:24.299586   15606 system_pods.go:89] "kube-proxy-hlgg9" [478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e] Running
	I0422 10:39:24.299593   15606 system_pods.go:89] "kube-scheduler-addons-649657" [d9bdd843-f8d4-45a9-977d-bed508686f8f] Running
	I0422 10:39:24.299600   15606 system_pods.go:89] "metrics-server-c59844bb4-phnbq" [ce74ad1e-3a35-470e-962e-901dcdc84a6d] Pending
	I0422 10:39:24.299611   15606 system_pods.go:89] "nvidia-device-plugin-daemonset-w4vxc" [3bfb0bd5-3242-4f72-9f7c-0c79543badd2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0422 10:39:24.299621   15606 system_pods.go:89] "registry-nqc7x" [b64590e0-a02f-45d2-8f1e-198288db17c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0422 10:39:24.299636   15606 system_pods.go:89] "registry-proxy-kvfwc" [8ff782c8-8bc1-4ee5-96c7-36c9b42dd909] Pending
	I0422 10:39:24.299645   15606 system_pods.go:89] "storage-provisioner" [3f7923bd-3f6b-44d8-846c-ed7eee65a6df] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0422 10:39:24.299654   15606 system_pods.go:89] "tiller-deploy-6677d64bcd-6gjgv" [8fff0c69-9c68-4af8-962b-aa26874d6504] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0422 10:39:24.299664   15606 system_pods.go:126] duration metric: took 2.3508754s to wait for k8s-apps to be running ...
	I0422 10:39:24.299676   15606 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 10:39:24.299730   15606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 10:39:25.008742   15606 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0422 10:39:25.008789   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:25.012108   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:25.012519   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:25.012553   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:25.012754   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:25.013004   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:25.013183   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:25.013396   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:25.300400   15606 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0422 10:39:25.363746   15606 addons.go:234] Setting addon gcp-auth=true in "addons-649657"
	I0422 10:39:25.363804   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:25.364109   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:25.364136   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:25.378400   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34151
	I0422 10:39:25.378800   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:25.379302   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:25.379335   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:25.379645   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:25.380256   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:25.380284   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:25.395868   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I0422 10:39:25.396314   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:25.396838   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:25.396865   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:25.397165   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:25.397370   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:25.399054   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:25.399268   15606 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0422 10:39:25.399290   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:25.401851   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:25.402216   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:25.402242   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:25.402403   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:25.402563   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:25.402708   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:25.402868   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:27.783212   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.319725015s)
	I0422 10:39:27.783274   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783287   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783285   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.185427025s)
	I0422 10:39:27.783325   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783342   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783381   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.818914555s)
	I0422 10:39:27.783414   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783327   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.182561698s)
	I0422 10:39:27.783430   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783447   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783461   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783505   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.743122797s)
	I0422 10:39:27.783535   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783547   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783625   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.701301891s)
	I0422 10:39:27.783647   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783658   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783685   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.783712   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.783729   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.558591613s)
	I0422 10:39:27.783737   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.783745   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.783753   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783761   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783773   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783782   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783846   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.783845   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.467931612s)
	I0422 10:39:27.783847   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.783864   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783872   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.783881   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783889   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783889   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.783909   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.783873   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783934   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.783947   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.783956   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783981   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.784042   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.594505418s)
	I0422 10:39:27.784077   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.784091   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.784051   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.784154   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.784163   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.784171   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783937   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.051686648s)
	W0422 10:39:27.784217   15606 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0422 10:39:27.784235   15606 retry.go:31] will retry after 158.301195ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
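The stderr above is the usual CRD ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not registered the new kind yet, so the first apply fails and the addon code retries (and re-runs with --force a few lines further down). A hedged sketch of an ordering that avoids the retry, driving kubectl through os/exec; the manifest paths are the ones from the log above, everything else is illustrative and not minikube's addons code:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run shells out to kubectl and echoes its combined output, loosely mirroring the
// ssh_runner invocations above (but locally, without the SSH hop).
func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("kubectl %v\n%s", args, out)
	return err
}

func main() {
	// 1. Apply only the CRDs first (paths copied from the failing command above).
	crds := []string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
	}
	for _, f := range crds {
		if err := run("apply", "-f", f); err != nil {
			log.Fatal(err)
		}
	}

	// 2. Block until the API server reports the new CRDs as Established.
	if err := run("wait", "--for=condition=established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
		"crd/volumesnapshotcontents.snapshot.storage.k8s.io",
		"crd/volumesnapshots.snapshot.storage.k8s.io"); err != nil {
		log.Fatal(err)
	}

	// 3. Only now apply the custom resource that needs those kinds.
	if err := run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
		log.Fatal(err)
	}
}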
	I0422 10:39:27.784287   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.784298   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.784311   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.784319   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.784343   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.784383   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.784398   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.784416   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.784417   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.784423   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.784428   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.784431   15606 addons.go:470] Verifying addon ingress=true in "addons-649657"
	I0422 10:39:27.784437   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.784447   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.788055   15606 out.go:177] * Verifying ingress addon...
	I0422 10:39:27.784515   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.784538   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.785301   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.785325   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.785340   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.785355   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.785369   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.785387   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.785399   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.785418   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.785431   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.785448   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.785484   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.786588   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.786609   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.789373   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.789387   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.789390   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.789407   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.789417   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.789438   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.789447   15606 addons.go:470] Verifying addon registry=true in "addons-649657"
	I0422 10:39:27.789474   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.789488   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.789497   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.789498   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.789509   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.789509   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.789516   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.791523   15606 out.go:177] * Verifying registry addon...
	I0422 10:39:27.789590   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.789770   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.789789   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.789817   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.789842   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.789860   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.789859   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.790283   15606 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0422 10:39:27.792805   15606 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-649657 service yakd-dashboard -n yakd-dashboard
	
	I0422 10:39:27.792853   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.792864   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.793897   15606 addons.go:470] Verifying addon metrics-server=true in "addons-649657"
	I0422 10:39:27.792880   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.793652   15606 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0422 10:39:27.855511   15606 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0422 10:39:27.855534   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:27.856217   15606 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0422 10:39:27.856233   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
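The kapi.go:86/96 lines above, and the long run of them that follows, are minikube polling each addon's pods by label selector until they leave Pending. A minimal client-go sketch of that polling loop (hypothetical and not the kapi helper itself; the kubeconfig path, timeout, and poll interval are assumptions):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunning lists pods matching selector in ns until all of them are Running
// or the timeout expires, roughly what the kapi.go:96 lines above are reporting.
func waitForRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				allRunning = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if allRunning {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same selectors the log is waiting on.
	if err := waitForRunning(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	if err := waitForRunning(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
}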
	I0422 10:39:27.875605   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.875630   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.875965   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.876006   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.876014   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	W0422 10:39:27.876093   15606 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0422 10:39:27.883802   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.883829   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.884125   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.884146   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.884157   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.943389   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0422 10:39:28.299257   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:28.299823   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:28.801199   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:28.804793   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:29.298871   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:29.300684   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:29.802138   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:29.802287   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:30.242588   15606 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.942836611s)
	I0422 10:39:30.242614   15606 system_svc.go:56] duration metric: took 5.942935897s WaitForService to wait for kubelet
	I0422 10:39:30.242622   15606 kubeadm.go:576] duration metric: took 12.396053479s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 10:39:30.242638   15606 node_conditions.go:102] verifying NodePressure condition ...
	I0422 10:39:30.242593   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.21349134s)
	I0422 10:39:30.242664   15606 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.843379723s)
	I0422 10:39:30.242696   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:30.242715   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:30.244302   15606 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0422 10:39:30.243051   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:30.243088   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:30.245924   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:30.247284   15606 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0422 10:39:30.245942   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:30.248471   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:30.248520   15606 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0422 10:39:30.248541   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0422 10:39:30.248707   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:30.248759   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:30.248788   15606 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-649657"
	I0422 10:39:30.248738   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:30.250241   15606 out.go:177] * Verifying csi-hostpath-driver addon...
	I0422 10:39:30.252590   15606 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0422 10:39:30.258210   15606 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 10:39:30.258251   15606 node_conditions.go:123] node cpu capacity is 2
	I0422 10:39:30.258263   15606 node_conditions.go:105] duration metric: took 15.621056ms to run NodePressure ...
	I0422 10:39:30.258277   15606 start.go:240] waiting for startup goroutines ...
	I0422 10:39:30.265074   15606 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0422 10:39:30.265093   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:30.296984   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:30.299915   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:30.419621   15606 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0422 10:39:30.419650   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0422 10:39:30.471923   15606 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0422 10:39:30.471950   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0422 10:39:30.526450   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0422 10:39:30.547580   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.604143349s)
	I0422 10:39:30.547641   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:30.547658   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:30.547924   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:30.547945   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:30.547953   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:30.547961   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:30.547966   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:30.548269   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:30.548305   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:30.548311   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:30.758827   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:30.797700   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:30.801195   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:31.271724   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:31.300487   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:31.300613   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:31.761639   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:31.815098   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:31.819575   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:32.240516   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.714017942s)
	I0422 10:39:32.240564   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:32.240577   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:32.240858   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:32.240930   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:32.240949   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:32.240984   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:32.240996   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:32.241309   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:32.241358   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:32.241371   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:32.242858   15606 addons.go:470] Verifying addon gcp-auth=true in "addons-649657"
	I0422 10:39:32.244792   15606 out.go:177] * Verifying gcp-auth addon...
	I0422 10:39:32.246984   15606 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0422 10:39:32.266191   15606 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0422 10:39:32.266209   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:32.267080   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:32.300005   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:32.300196   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:32.751256   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:32.761955   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:32.797859   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:32.799616   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:33.265982   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:33.267181   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:33.297656   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:33.302788   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:33.751461   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:33.757816   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:33.798902   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:33.800497   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:34.250839   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:34.258828   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:34.297755   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:34.301280   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:34.750813   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:34.758807   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:34.796724   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:34.799756   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:35.251169   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:35.258406   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:35.298627   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:35.299001   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:35.750876   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:35.758841   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:35.797734   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:35.800889   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:36.253582   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:36.269805   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:36.301799   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:36.311060   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:36.750961   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:36.757145   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:36.797672   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:36.800879   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:37.251526   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:37.260658   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:37.304630   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:37.304881   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:37.751271   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:37.758046   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:37.797954   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:37.800328   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:38.251162   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:38.258429   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:38.297524   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:38.300386   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:38.752755   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:38.758989   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:38.797163   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:38.800149   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:39.251109   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:39.257602   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:39.297791   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:39.300169   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:39.751263   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:39.757901   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:39.797454   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:39.798721   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:40.251228   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:40.261909   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:40.300493   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:40.300699   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:40.751302   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:40.761298   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:40.797321   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:40.798468   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:41.251521   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:41.263996   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:41.298298   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:41.300270   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:41.751745   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:41.764737   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:41.797451   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:41.799675   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:42.251408   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:42.259193   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:42.297977   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:42.304050   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:42.751187   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:42.758257   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:42.797562   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:42.807459   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:43.250916   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:43.259221   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:43.297459   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:43.299596   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:43.750817   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:43.761029   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:43.799000   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:43.799066   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:44.251135   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:44.258704   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:44.299006   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:44.299086   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:44.750327   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:44.758248   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:44.797967   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:44.800002   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:45.250557   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:45.258064   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:45.298860   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:45.299654   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:45.751219   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:45.758124   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:45.798370   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:45.798932   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:46.252029   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:46.258806   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:46.297145   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:46.300690   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:46.751268   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:46.757851   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:46.805402   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:46.806737   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:47.251439   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:47.258208   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:47.297444   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:47.299881   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:47.751870   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:47.759221   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:47.797544   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:47.799709   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:48.251071   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:48.258353   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:48.303375   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:48.305920   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:48.751827   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:48.759582   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:48.798619   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:48.799679   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:49.251198   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:49.259409   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:49.299548   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:49.301012   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:49.755469   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:49.761322   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:49.797483   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:49.798720   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:50.251641   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:50.258085   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:50.299914   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:50.299992   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:50.751760   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:50.758264   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:50.799520   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:50.802665   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:51.251538   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:51.263946   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:51.297675   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:51.298695   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:51.753049   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:51.761455   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:51.797861   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:51.799508   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:52.251559   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:52.257749   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:52.299927   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:52.308948   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:52.751296   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:52.759782   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:52.798902   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:52.798910   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:53.250749   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:53.258689   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:53.297244   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:53.299761   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:53.751892   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:53.759152   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:53.798507   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:53.803716   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:54.250975   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:54.257630   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:54.300665   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:54.300782   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:54.751392   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:54.758392   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:54.798282   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:54.800575   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:55.251324   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:55.257476   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:55.300423   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:55.300764   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:55.751775   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:55.763339   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:55.798034   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:55.799675   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:56.251303   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:56.258338   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:56.298365   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:56.298393   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:56.752173   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:56.761943   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:56.797699   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:56.804969   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:57.252199   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:57.257182   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:57.298155   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:57.303012   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:57.750666   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:57.758383   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:57.798762   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:57.799070   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:58.251655   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:58.258124   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:58.298880   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:58.299905   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:58.751840   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:58.766025   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:58.797739   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:58.801680   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:59.250989   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:59.259445   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:59.298158   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:59.314169   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:59.752736   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:59.757143   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:59.798136   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:59.805716   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:00.250990   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:00.258280   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:00.301503   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:00.302277   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:00.753021   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:00.759186   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:00.799743   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:00.801834   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:01.251155   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:01.259226   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:01.299550   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:01.299707   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:01.751712   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:01.764358   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:01.799014   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:01.799754   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:02.251177   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:02.258254   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:02.298193   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:02.299894   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:02.752958   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:02.758480   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:02.801194   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:02.812069   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:03.251464   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:03.258517   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:03.299302   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:03.299873   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:03.751050   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:03.765175   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:03.798038   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:03.799223   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:04.251728   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:04.258823   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:04.298274   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:04.300902   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:04.752103   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:04.758199   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:04.799781   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:04.801007   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:05.251681   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:05.258298   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:05.297918   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:05.300154   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:05.751029   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:05.761342   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:05.797917   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:05.799735   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:06.251578   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:06.258433   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:06.298829   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:06.301452   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:06.750936   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:06.757849   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:06.801027   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:06.803421   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:07.250944   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:07.258624   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:07.297089   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:07.299015   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:07.750416   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:07.758764   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:07.798767   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:07.805978   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:08.253277   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:08.259792   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:08.298214   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:08.298678   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:08.751546   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:08.757982   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:08.798634   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:08.799169   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:09.250986   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:09.257529   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:09.299385   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:09.300953   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:09.751385   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:09.758185   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:09.799208   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:09.799616   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:10.251445   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:10.257823   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:10.298028   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:10.298342   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:10.750909   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:10.761964   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:10.798381   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:10.800410   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:11.252543   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:11.258781   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:11.299932   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:11.306132   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:11.750600   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:11.758905   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:11.797691   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:11.802223   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:12.253640   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:12.260688   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:12.297104   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:12.301292   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:12.750611   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:12.758383   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:12.801802   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:12.803665   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:13.250966   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:13.258244   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:13.298662   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:13.300326   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:13.751050   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:13.758361   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:13.798608   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:13.800389   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:14.252367   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:14.267261   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:14.302906   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:14.303394   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:14.798046   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:14.798265   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:14.802663   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:14.802706   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:15.251221   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:15.257621   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:15.298483   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:15.300370   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:15.751142   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:15.757914   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:15.812832   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:15.816012   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:16.251522   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:16.258129   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:16.298113   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:16.298212   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:16.750663   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:16.775589   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:16.797885   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:16.799887   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:17.251144   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:17.257909   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:17.298004   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:17.300266   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:17.750661   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:17.758816   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:17.797701   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:17.799348   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:18.250691   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:18.258501   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:18.297446   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:18.300071   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:18.750497   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:18.757679   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:18.796765   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:18.798809   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:19.251794   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:19.259235   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:19.297506   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:19.298969   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:19.853554   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:19.853878   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:19.854038   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:19.855319   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:20.252209   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:20.257756   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:20.299011   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:20.299063   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:20.751382   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:20.757832   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:20.806441   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:20.806452   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:21.250726   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:21.258060   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:21.298943   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:21.300858   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:21.752081   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:21.757735   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:21.797344   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:21.799600   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:22.251859   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:22.259630   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:22.298901   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:22.304069   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:22.751603   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:22.761980   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:22.797775   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:22.798953   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:23.251760   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:23.258171   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:23.299201   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:23.300634   15606 kapi.go:107] duration metric: took 55.506982184s to wait for kubernetes.io/minikube-addons=registry ...
	I0422 10:40:23.753660   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:23.762677   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:23.798049   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:24.251854   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:24.259414   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:24.298371   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:24.751919   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:24.758375   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:24.798104   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:25.251811   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:25.259688   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:25.298302   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:25.751683   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:25.759321   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:25.798207   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:26.252940   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:26.259705   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:26.301105   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:26.751562   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:26.759990   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:26.798134   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:27.252558   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:27.261576   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:27.296830   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:27.753534   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:27.758909   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:27.797134   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:28.252049   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:28.260236   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:28.299520   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:28.751892   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:28.760373   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:28.797051   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:29.250986   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:29.261839   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:29.298023   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:29.751572   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:29.760416   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:29.801131   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:30.252023   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:30.259332   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:30.298473   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:30.752049   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:30.758630   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:30.798145   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:31.251861   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:31.257619   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:31.298754   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:31.752158   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:31.762633   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:31.796985   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:32.430548   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:32.431278   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:32.434600   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:32.751309   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:32.757904   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:32.799199   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:33.254139   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:33.259089   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:33.302753   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:33.751922   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:33.759486   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:33.796719   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:34.251844   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:34.259913   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:34.297216   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:34.751487   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:34.758188   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:34.799094   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:35.252076   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:35.257462   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:35.306254   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:35.750805   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:35.758567   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:35.796693   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:36.250929   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:36.258380   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:36.299296   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:36.752208   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:36.758150   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:36.797833   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:37.251178   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:37.258048   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:37.298653   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:37.924941   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:37.925237   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:37.931565   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:38.251058   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:38.261300   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:38.299242   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:38.763012   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:38.766879   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:38.797456   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:39.254956   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:39.288453   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:39.307320   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:39.771053   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:39.775698   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:39.806650   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:40.251281   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:40.259579   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:40.297788   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:40.752516   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:40.759435   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:40.798584   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:41.252858   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:41.273873   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:41.297825   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:41.751233   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:41.757940   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:41.796850   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:42.251375   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:42.257975   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:42.300288   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:42.751486   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:42.758026   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:42.797822   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:43.258102   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:43.263026   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:43.298426   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:43.754545   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:43.762931   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:43.800726   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:44.250868   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:44.258552   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:44.297608   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:44.751134   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:44.758548   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:44.797161   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:45.251529   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:45.259102   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:45.297321   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:45.754959   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:45.782860   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:45.797636   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:46.251351   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:46.258114   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:46.297497   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:46.756020   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:46.759590   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:46.801551   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:47.534483   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:47.534708   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:47.535110   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:47.751724   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:47.762425   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:47.797209   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:48.251760   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:48.258445   15606 kapi.go:107] duration metric: took 1m18.005855744s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0422 10:40:48.301698   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:48.751218   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:48.798612   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:49.251264   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:49.298955   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:49.751668   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:49.802779   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:50.251195   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:50.297484   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:50.751843   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:50.799236   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:51.250573   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:51.297974   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:51.751962   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:51.796947   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:52.250884   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:52.298851   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:52.751207   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:52.797286   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:53.251647   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:53.298467   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:53.750890   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:53.798417   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:54.251631   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:54.298183   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:54.750683   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:54.798028   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:55.251792   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:55.298902   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:55.751465   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:55.798679   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:56.250737   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:56.300354   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:56.751588   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:56.797380   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:57.250484   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:57.297658   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:57.751703   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:57.798010   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:58.295574   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:58.303613   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:58.750629   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:58.797817   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:59.251117   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:59.298418   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:59.750592   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:59.797730   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:00.251575   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:00.300993   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:00.751855   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:00.798247   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:01.250654   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:01.297955   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:01.751670   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:01.797413   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:02.250616   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:02.298094   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:02.752564   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:02.798887   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:03.252402   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:03.298411   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:03.750773   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:03.798066   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:04.251821   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:04.297927   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:04.751137   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:04.797132   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:05.251887   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:05.298236   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:05.751184   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:05.797400   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:06.251202   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:06.297780   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:06.751764   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:06.798132   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:07.252070   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:07.297125   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:07.752050   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:07.797336   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:08.251123   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:08.297588   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:08.755796   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:08.798633   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:09.251016   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:09.298406   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:09.751102   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:09.798069   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:10.251876   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:10.298188   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:10.751620   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:10.798800   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:11.251034   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:11.297051   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:11.751410   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:11.798269   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:12.251431   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:12.298630   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:12.751250   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:12.798501   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:13.251631   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:13.298223   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:13.750380   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:13.797923   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:14.251731   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:14.298277   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:14.750823   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:14.798099   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:15.251884   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:15.300452   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:15.750855   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:15.798440   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:16.251296   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:16.297527   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:16.752451   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:16.798551   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:17.250864   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:17.298478   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:17.750939   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:17.799633   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:18.250835   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:18.297891   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:18.751241   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:18.797445   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:19.250367   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:19.298783   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:19.751053   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:19.799373   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:20.251006   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:20.298159   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:20.751223   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:20.797350   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:21.250591   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:21.299166   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:21.751771   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:21.797746   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:22.251097   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:22.298013   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:22.750708   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:22.797828   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:23.251683   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:23.297817   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:23.752014   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:23.799643   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:24.251123   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:24.297842   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:24.752707   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:24.797899   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:25.251909   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:25.298492   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:25.750630   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:25.799055   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:26.251965   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:26.299100   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:26.751733   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:26.803501   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:27.251221   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:27.297546   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:27.750260   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:27.799089   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:28.251648   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:28.297612   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:28.750560   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:28.797944   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:29.250924   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:29.301473   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:29.751510   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:29.797721   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:30.251385   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:30.298764   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:30.751068   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:30.799710   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:31.250821   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:31.298364   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:31.750579   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:31.797897   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:32.252247   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:32.297545   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:32.750749   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:32.797648   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:33.252225   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:33.297169   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:33.751792   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:33.798045   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:34.251913   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:34.298671   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:34.750682   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:34.797741   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:35.251434   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:35.298521   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:35.750556   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:35.797659   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:36.251429   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:36.297586   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:36.751156   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:36.797627   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:37.251265   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:37.297466   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:37.750859   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:37.798212   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:38.251828   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:38.298315   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:38.750913   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:38.798410   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:39.251586   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:39.298998   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:39.750993   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:39.798799   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:40.251181   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:40.298263   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:40.750176   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:40.799171   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:41.250615   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:41.298160   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:41.751472   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:41.798586   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:42.250952   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:42.297870   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:42.751007   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:42.796857   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:43.251099   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:43.297604   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:43.750936   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:43.798524   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:44.250936   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:44.300719   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:44.751303   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:44.797720   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:45.251005   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:45.297358   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:45.752199   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:45.797628   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:46.251149   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:46.298631   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:46.751237   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:46.797269   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:47.251230   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:47.297670   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:47.752174   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:47.797156   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:48.250652   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:48.297732   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:48.751025   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:48.797948   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:49.251697   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:49.297757   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:49.751068   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:49.797897   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:50.251703   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:50.298681   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:50.754086   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:50.797169   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:51.251485   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:51.298835   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:51.750600   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:51.800432   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:52.254810   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:52.298168   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:52.751249   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:52.798498   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:53.251632   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:53.297710   15606 kapi.go:107] duration metric: took 2m25.507425854s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0422 10:41:53.750790   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:54.250596   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:54.752331   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:55.251034   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:55.750688   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:56.251796   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:56.755417   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:57.251390   15606 kapi.go:107] duration metric: took 2m25.004403033s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0422 10:41:57.253407   15606 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-649657 cluster.
	I0422 10:41:57.254932   15606 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0422 10:41:57.256417   15606 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0422 10:41:57.257848   15606 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, helm-tiller, yakd, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0422 10:41:57.259177   15606 addons.go:505] duration metric: took 2m39.412582042s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns helm-tiller yakd inspektor-gadget metrics-server default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0422 10:41:57.259217   15606 start.go:245] waiting for cluster config update ...
	I0422 10:41:57.259238   15606 start.go:254] writing updated cluster config ...
	I0422 10:41:57.259503   15606 ssh_runner.go:195] Run: rm -f paused
	I0422 10:41:57.312602   15606 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 10:41:57.314485   15606 out.go:177] * Done! kubectl is now configured to use "addons-649657" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.327353963Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713782708327322564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6035546-f80b-4f6d-9775-2961ccfaacd0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.328190204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3141a53a-f540-43df-8773-ea7ab9b674ba name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.328274936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3141a53a-f540-43df-8773-ea7ab9b674ba name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.328583970Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:da39809ea20386d490cfaf8219db59f46bbdddd9c3b9ef9efdb5ff5f38a11628,PodSandboxId:b17e14cc83602824420c9600bcfc007ad47de22842fe4f413090080028e485b4,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713782702132634183,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-wrvcz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 687a0353-5ece-41c2-8d6f-fe72342f0226,},Annotations:map[string]string{io.kubernetes.container.hash: 3544bfa3,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7239d5ee0e6269bfa36420836c8ed9f52cab4eefffadf575e3b318f44f571dc2,PodSandboxId:c8d2c78f493f4470c16b4971a5ca931f9f6440b4f519e820e25f57bc352316d5,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713782565768368210,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-bb5x7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: e60be124-1cbb-461a-b07a-c7ad8934897d,},Annota
tions:map[string]string{io.kubernetes.container.hash: dc4e7fa6,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c00009bfd8e12ddd3cd3e442734252bf3b33de37d4221b1c728eb6cf1260a7,PodSandboxId:13b8c3288617978a6bd4f51de5a0b637795ded04d7701c43becda5ec0be110a1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713782559252314783,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 4635a414-2076-41e6-b935-fd98104af18f,},Annotations:map[string]string{io.kubernetes.container.hash: 59547dc7,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98ce745d0650b4b452c0bafed8251ffb60086e77a34f7a677a87af3eb5451dd6,PodSandboxId:0089ee81a28640fa1a90a60bb9e1b3c80c2d23f75a963dc2ca2af0c5fc3aaa10,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713782516590170356,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9bc6d,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: aafecea0-aca4-4896-8e41-40e809b7f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: d6b5e118,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82837449f00a1cb15cf7b006d0101df1980e30f5ef698f0292f8a651cbd753c2,PodSandboxId:e87eadec82e10c1701c89969856aaafbd4070e52efd8817a92d9d74699dd7a5b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,Stat
e:CONTAINER_RUNNING,CreatedAt:1713782425464910741,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-s7lgz,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a83e43e-cd63-407e-aab4-be83ab5f77f8,},Annotations:map[string]string{io.kubernetes.container.hash: 300bd0b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5629db3b79bb4ff9c4eb32fbca70aee4c1d8b18df6f187a73b65df7032c571,PodSandboxId:a582669af38b5231f48d12ecc1fa1a647cccb1677168e44b30e7bc8fb3805fe0,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f
75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1713782415095134592,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-rz9f2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 5d5608ee-50a3-46d3-9363-9bef97083ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 27ac0260,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747bedb41a6513d6f0ed3d498d50de4156ef98e6ab9e372c254f10629802adfe,PodSandboxId:b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713782398948427594,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-phnbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce74ad1e-3a35-470e-962e-901dcdc84a6d,},Annotations:map[string]string{io.kubernetes.container.hash: b13acbe3,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca8963cd05ebf8e5c2ab895728ad12bcce57a49e34622e946bd3d0130d46b17,PodSandboxId:8f9a4ee47413b897023b0adcccede17a3cdcd71a8350a3303689eafcd2eabf67,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713782365309632220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7923bd-3f6b-44d8-846c-ed7eee65a6df,},Annotations:map[string]string{io.kubernetes.container.hash: e52f1f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e399e5bad2ea7afff9b2984de13eba02623820f9d265c24111fb4f7ca6de5c,PodSandboxId:8fada0962ee40a9c874c76d55e10b0575be2ba864816e8a92688313389381590,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Im
age:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713782363333380774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2mxqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2ffe62-c568-4ca9-b23a-2976185dc0c0,},Annotations:map[string]string{io.kubernetes.container.hash: a67ae4fb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc4
ae3f334eb8f15a02ce1cb74c938edda287420283c1625060ec6de34223cfc,PodSandboxId:62aa08616b67da6632a53210cfbbdcef6c311a35aae53ae9364e167f48faf281,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713782360160734997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hlgg9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e,},Annotations:map[string]string{io.kubernetes.container.hash: 9854acf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:228b3107896998ed67a5c41465c19156bb68
c40d0b7d32997369f4ceea0e9199,PodSandboxId:75be32aa73a6fdf9bbf430ca63dcb63c2f8f13d58e9d91b7f9206327239a5f46,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713782339938654563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee34a8718d450cdc971ff15e6bcf368,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a61e188d41b17b7cae7258b2e08215974cb51d7f7cb89893a9e
4eb40fc5a3d,PodSandboxId:50ca3ae500a2e0a6107d981b0924c139233deff63e724a3a01b355cb298b8b17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713782339926609237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98542d021c1579a6297e229b3c72ace,},Annotations:map[string]string{io.kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87c8e5071b3101013333015fc0e2d11e262168ef3ae336c3da95c8911871553,PodSa
ndboxId:92a4ef4ade159a5ee065deb5945fd4c857ccacf6e702b5496a97bdc22bcfe791,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713782339846013301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7fa9beead7b52a4e887c1dc4431871,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dce2c24d181494f5a032b1ca97445bc0c8ca16e280f781f2fe9667680c6
ff00,PodSandboxId:701814d87562835b2262a0ad5c2424dca08ae4bc77de5e34afcd4ebc6da23a1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713782339798738126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b0f2f70abeaca245aa3f96738d8202,},Annotations:map[string]string{io.kubernetes.container.hash: 78f47633,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3141a53a-f540-43df-8773-ea7ab9b674ba name=/runtime.v1.RuntimeService/Lis
tContainers
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.379613181Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2735010d-8a72-4a3e-8f29-283f603687b0 name=/runtime.v1.RuntimeService/Version
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.379722572Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2735010d-8a72-4a3e-8f29-283f603687b0 name=/runtime.v1.RuntimeService/Version
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.381688780Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=295f8ce8-9a52-49c0-8d2c-0788895452d3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.383060672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713782708383030793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=295f8ce8-9a52-49c0-8d2c-0788895452d3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.384329058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba78f050-cae5-4c5d-b333-534be2aeaefa name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.384400106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba78f050-cae5-4c5d-b333-534be2aeaefa name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.384685880Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:da39809ea20386d490cfaf8219db59f46bbdddd9c3b9ef9efdb5ff5f38a11628,PodSandboxId:b17e14cc83602824420c9600bcfc007ad47de22842fe4f413090080028e485b4,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713782702132634183,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-wrvcz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 687a0353-5ece-41c2-8d6f-fe72342f0226,},Annotations:map[string]string{io.kubernetes.container.hash: 3544bfa3,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7239d5ee0e6269bfa36420836c8ed9f52cab4eefffadf575e3b318f44f571dc2,PodSandboxId:c8d2c78f493f4470c16b4971a5ca931f9f6440b4f519e820e25f57bc352316d5,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713782565768368210,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-bb5x7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: e60be124-1cbb-461a-b07a-c7ad8934897d,},Annota
tions:map[string]string{io.kubernetes.container.hash: dc4e7fa6,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c00009bfd8e12ddd3cd3e442734252bf3b33de37d4221b1c728eb6cf1260a7,PodSandboxId:13b8c3288617978a6bd4f51de5a0b637795ded04d7701c43becda5ec0be110a1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713782559252314783,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 4635a414-2076-41e6-b935-fd98104af18f,},Annotations:map[string]string{io.kubernetes.container.hash: 59547dc7,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98ce745d0650b4b452c0bafed8251ffb60086e77a34f7a677a87af3eb5451dd6,PodSandboxId:0089ee81a28640fa1a90a60bb9e1b3c80c2d23f75a963dc2ca2af0c5fc3aaa10,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713782516590170356,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9bc6d,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: aafecea0-aca4-4896-8e41-40e809b7f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: d6b5e118,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82837449f00a1cb15cf7b006d0101df1980e30f5ef698f0292f8a651cbd753c2,PodSandboxId:e87eadec82e10c1701c89969856aaafbd4070e52efd8817a92d9d74699dd7a5b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,Stat
e:CONTAINER_RUNNING,CreatedAt:1713782425464910741,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-s7lgz,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a83e43e-cd63-407e-aab4-be83ab5f77f8,},Annotations:map[string]string{io.kubernetes.container.hash: 300bd0b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5629db3b79bb4ff9c4eb32fbca70aee4c1d8b18df6f187a73b65df7032c571,PodSandboxId:a582669af38b5231f48d12ecc1fa1a647cccb1677168e44b30e7bc8fb3805fe0,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f
75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1713782415095134592,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-rz9f2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 5d5608ee-50a3-46d3-9363-9bef97083ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 27ac0260,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747bedb41a6513d6f0ed3d498d50de4156ef98e6ab9e372c254f10629802adfe,PodSandboxId:b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713782398948427594,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-phnbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce74ad1e-3a35-470e-962e-901dcdc84a6d,},Annotations:map[string]string{io.kubernetes.container.hash: b13acbe3,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca8963cd05ebf8e5c2ab895728ad12bcce57a49e34622e946bd3d0130d46b17,PodSandboxId:8f9a4ee47413b897023b0adcccede17a3cdcd71a8350a3303689eafcd2eabf67,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713782365309632220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7923bd-3f6b-44d8-846c-ed7eee65a6df,},Annotations:map[string]string{io.kubernetes.container.hash: e52f1f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e399e5bad2ea7afff9b2984de13eba02623820f9d265c24111fb4f7ca6de5c,PodSandboxId:8fada0962ee40a9c874c76d55e10b0575be2ba864816e8a92688313389381590,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Im
age:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713782363333380774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2mxqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2ffe62-c568-4ca9-b23a-2976185dc0c0,},Annotations:map[string]string{io.kubernetes.container.hash: a67ae4fb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc4
ae3f334eb8f15a02ce1cb74c938edda287420283c1625060ec6de34223cfc,PodSandboxId:62aa08616b67da6632a53210cfbbdcef6c311a35aae53ae9364e167f48faf281,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713782360160734997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hlgg9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e,},Annotations:map[string]string{io.kubernetes.container.hash: 9854acf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:228b3107896998ed67a5c41465c19156bb68
c40d0b7d32997369f4ceea0e9199,PodSandboxId:75be32aa73a6fdf9bbf430ca63dcb63c2f8f13d58e9d91b7f9206327239a5f46,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713782339938654563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee34a8718d450cdc971ff15e6bcf368,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a61e188d41b17b7cae7258b2e08215974cb51d7f7cb89893a9e
4eb40fc5a3d,PodSandboxId:50ca3ae500a2e0a6107d981b0924c139233deff63e724a3a01b355cb298b8b17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713782339926609237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98542d021c1579a6297e229b3c72ace,},Annotations:map[string]string{io.kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87c8e5071b3101013333015fc0e2d11e262168ef3ae336c3da95c8911871553,PodSa
ndboxId:92a4ef4ade159a5ee065deb5945fd4c857ccacf6e702b5496a97bdc22bcfe791,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713782339846013301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7fa9beead7b52a4e887c1dc4431871,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dce2c24d181494f5a032b1ca97445bc0c8ca16e280f781f2fe9667680c6
ff00,PodSandboxId:701814d87562835b2262a0ad5c2424dca08ae4bc77de5e34afcd4ebc6da23a1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713782339798738126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b0f2f70abeaca245aa3f96738d8202,},Annotations:map[string]string{io.kubernetes.container.hash: 78f47633,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba78f050-cae5-4c5d-b333-534be2aeaefa name=/runtime.v1.RuntimeService/Lis
tContainers
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.428301838Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a125ed48-e1e9-4356-a93a-b5317499b597 name=/runtime.v1.RuntimeService/Version
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.428413841Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a125ed48-e1e9-4356-a93a-b5317499b597 name=/runtime.v1.RuntimeService/Version
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.430002829Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=768edb3a-0776-4dec-90da-7d0588f1ca34 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.431623626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713782708431593865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=768edb3a-0776-4dec-90da-7d0588f1ca34 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.432627829Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cffad143-1e3e-489d-9624-a3031d355245 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.432680773Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cffad143-1e3e-489d-9624-a3031d355245 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.433228208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:da39809ea20386d490cfaf8219db59f46bbdddd9c3b9ef9efdb5ff5f38a11628,PodSandboxId:b17e14cc83602824420c9600bcfc007ad47de22842fe4f413090080028e485b4,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713782702132634183,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-wrvcz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 687a0353-5ece-41c2-8d6f-fe72342f0226,},Annotations:map[string]string{io.kubernetes.container.hash: 3544bfa3,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7239d5ee0e6269bfa36420836c8ed9f52cab4eefffadf575e3b318f44f571dc2,PodSandboxId:c8d2c78f493f4470c16b4971a5ca931f9f6440b4f519e820e25f57bc352316d5,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713782565768368210,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-bb5x7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: e60be124-1cbb-461a-b07a-c7ad8934897d,},Annota
tions:map[string]string{io.kubernetes.container.hash: dc4e7fa6,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c00009bfd8e12ddd3cd3e442734252bf3b33de37d4221b1c728eb6cf1260a7,PodSandboxId:13b8c3288617978a6bd4f51de5a0b637795ded04d7701c43becda5ec0be110a1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713782559252314783,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 4635a414-2076-41e6-b935-fd98104af18f,},Annotations:map[string]string{io.kubernetes.container.hash: 59547dc7,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98ce745d0650b4b452c0bafed8251ffb60086e77a34f7a677a87af3eb5451dd6,PodSandboxId:0089ee81a28640fa1a90a60bb9e1b3c80c2d23f75a963dc2ca2af0c5fc3aaa10,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713782516590170356,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9bc6d,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: aafecea0-aca4-4896-8e41-40e809b7f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: d6b5e118,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82837449f00a1cb15cf7b006d0101df1980e30f5ef698f0292f8a651cbd753c2,PodSandboxId:e87eadec82e10c1701c89969856aaafbd4070e52efd8817a92d9d74699dd7a5b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,Stat
e:CONTAINER_RUNNING,CreatedAt:1713782425464910741,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-s7lgz,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a83e43e-cd63-407e-aab4-be83ab5f77f8,},Annotations:map[string]string{io.kubernetes.container.hash: 300bd0b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5629db3b79bb4ff9c4eb32fbca70aee4c1d8b18df6f187a73b65df7032c571,PodSandboxId:a582669af38b5231f48d12ecc1fa1a647cccb1677168e44b30e7bc8fb3805fe0,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f
75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1713782415095134592,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-rz9f2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 5d5608ee-50a3-46d3-9363-9bef97083ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 27ac0260,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747bedb41a6513d6f0ed3d498d50de4156ef98e6ab9e372c254f10629802adfe,PodSandboxId:b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713782398948427594,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-phnbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce74ad1e-3a35-470e-962e-901dcdc84a6d,},Annotations:map[string]string{io.kubernetes.container.hash: b13acbe3,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca8963cd05ebf8e5c2ab895728ad12bcce57a49e34622e946bd3d0130d46b17,PodSandboxId:8f9a4ee47413b897023b0adcccede17a3cdcd71a8350a3303689eafcd2eabf67,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713782365309632220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7923bd-3f6b-44d8-846c-ed7eee65a6df,},Annotations:map[string]string{io.kubernetes.container.hash: e52f1f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e399e5bad2ea7afff9b2984de13eba02623820f9d265c24111fb4f7ca6de5c,PodSandboxId:8fada0962ee40a9c874c76d55e10b0575be2ba864816e8a92688313389381590,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Im
age:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713782363333380774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2mxqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2ffe62-c568-4ca9-b23a-2976185dc0c0,},Annotations:map[string]string{io.kubernetes.container.hash: a67ae4fb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc4
ae3f334eb8f15a02ce1cb74c938edda287420283c1625060ec6de34223cfc,PodSandboxId:62aa08616b67da6632a53210cfbbdcef6c311a35aae53ae9364e167f48faf281,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713782360160734997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hlgg9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e,},Annotations:map[string]string{io.kubernetes.container.hash: 9854acf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:228b3107896998ed67a5c41465c19156bb68
c40d0b7d32997369f4ceea0e9199,PodSandboxId:75be32aa73a6fdf9bbf430ca63dcb63c2f8f13d58e9d91b7f9206327239a5f46,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713782339938654563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee34a8718d450cdc971ff15e6bcf368,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a61e188d41b17b7cae7258b2e08215974cb51d7f7cb89893a9e
4eb40fc5a3d,PodSandboxId:50ca3ae500a2e0a6107d981b0924c139233deff63e724a3a01b355cb298b8b17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713782339926609237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98542d021c1579a6297e229b3c72ace,},Annotations:map[string]string{io.kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87c8e5071b3101013333015fc0e2d11e262168ef3ae336c3da95c8911871553,PodSa
ndboxId:92a4ef4ade159a5ee065deb5945fd4c857ccacf6e702b5496a97bdc22bcfe791,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713782339846013301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7fa9beead7b52a4e887c1dc4431871,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dce2c24d181494f5a032b1ca97445bc0c8ca16e280f781f2fe9667680c6
ff00,PodSandboxId:701814d87562835b2262a0ad5c2424dca08ae4bc77de5e34afcd4ebc6da23a1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713782339798738126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b0f2f70abeaca245aa3f96738d8202,},Annotations:map[string]string{io.kubernetes.container.hash: 78f47633,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cffad143-1e3e-489d-9624-a3031d355245 name=/runtime.v1.RuntimeService/Lis
tContainers
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.479192820Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d1f611e-58be-4939-a8c7-f4f5349b0697 name=/runtime.v1.RuntimeService/Version
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.479263596Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d1f611e-58be-4939-a8c7-f4f5349b0697 name=/runtime.v1.RuntimeService/Version
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.480574735Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bcbdd57a-cd32-426a-871a-d92cd79abc37 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.482748444Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713782708482721218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bcbdd57a-cd32-426a-871a-d92cd79abc37 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.483669394Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a6353e1-0d07-46ca-8d9d-e19fde98157c name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.483727402Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a6353e1-0d07-46ca-8d9d-e19fde98157c name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 10:45:08 addons-649657 crio[680]: time="2024-04-22 10:45:08.484069617Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:da39809ea20386d490cfaf8219db59f46bbdddd9c3b9ef9efdb5ff5f38a11628,PodSandboxId:b17e14cc83602824420c9600bcfc007ad47de22842fe4f413090080028e485b4,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713782702132634183,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-wrvcz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 687a0353-5ece-41c2-8d6f-fe72342f0226,},Annotations:map[string]string{io.kubernetes.container.hash: 3544bfa3,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7239d5ee0e6269bfa36420836c8ed9f52cab4eefffadf575e3b318f44f571dc2,PodSandboxId:c8d2c78f493f4470c16b4971a5ca931f9f6440b4f519e820e25f57bc352316d5,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713782565768368210,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-bb5x7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: e60be124-1cbb-461a-b07a-c7ad8934897d,},Annota
tions:map[string]string{io.kubernetes.container.hash: dc4e7fa6,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c00009bfd8e12ddd3cd3e442734252bf3b33de37d4221b1c728eb6cf1260a7,PodSandboxId:13b8c3288617978a6bd4f51de5a0b637795ded04d7701c43becda5ec0be110a1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713782559252314783,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 4635a414-2076-41e6-b935-fd98104af18f,},Annotations:map[string]string{io.kubernetes.container.hash: 59547dc7,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98ce745d0650b4b452c0bafed8251ffb60086e77a34f7a677a87af3eb5451dd6,PodSandboxId:0089ee81a28640fa1a90a60bb9e1b3c80c2d23f75a963dc2ca2af0c5fc3aaa10,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713782516590170356,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9bc6d,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: aafecea0-aca4-4896-8e41-40e809b7f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: d6b5e118,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82837449f00a1cb15cf7b006d0101df1980e30f5ef698f0292f8a651cbd753c2,PodSandboxId:e87eadec82e10c1701c89969856aaafbd4070e52efd8817a92d9d74699dd7a5b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,Stat
e:CONTAINER_RUNNING,CreatedAt:1713782425464910741,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-s7lgz,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a83e43e-cd63-407e-aab4-be83ab5f77f8,},Annotations:map[string]string{io.kubernetes.container.hash: 300bd0b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5629db3b79bb4ff9c4eb32fbca70aee4c1d8b18df6f187a73b65df7032c571,PodSandboxId:a582669af38b5231f48d12ecc1fa1a647cccb1677168e44b30e7bc8fb3805fe0,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f
75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1713782415095134592,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-rz9f2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 5d5608ee-50a3-46d3-9363-9bef97083ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 27ac0260,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747bedb41a6513d6f0ed3d498d50de4156ef98e6ab9e372c254f10629802adfe,PodSandboxId:b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713782398948427594,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-phnbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce74ad1e-3a35-470e-962e-901dcdc84a6d,},Annotations:map[string]string{io.kubernetes.container.hash: b13acbe3,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca8963cd05ebf8e5c2ab895728ad12bcce57a49e34622e946bd3d0130d46b17,PodSandboxId:8f9a4ee47413b897023b0adcccede17a3cdcd71a8350a3303689eafcd2eabf67,Metadata:&ContainerMetadata{Name:storage-pr
ovisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713782365309632220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7923bd-3f6b-44d8-846c-ed7eee65a6df,},Annotations:map[string]string{io.kubernetes.container.hash: e52f1f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e399e5bad2ea7afff9b2984de13eba02623820f9d265c24111fb4f7ca6de5c,PodSandboxId:8fada0962ee40a9c874c76d55e10b0575be2ba864816e8a92688313389381590,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Im
age:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713782363333380774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2mxqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2ffe62-c568-4ca9-b23a-2976185dc0c0,},Annotations:map[string]string{io.kubernetes.container.hash: a67ae4fb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc4
ae3f334eb8f15a02ce1cb74c938edda287420283c1625060ec6de34223cfc,PodSandboxId:62aa08616b67da6632a53210cfbbdcef6c311a35aae53ae9364e167f48faf281,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713782360160734997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hlgg9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e,},Annotations:map[string]string{io.kubernetes.container.hash: 9854acf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:228b3107896998ed67a5c41465c19156bb68
c40d0b7d32997369f4ceea0e9199,PodSandboxId:75be32aa73a6fdf9bbf430ca63dcb63c2f8f13d58e9d91b7f9206327239a5f46,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713782339938654563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee34a8718d450cdc971ff15e6bcf368,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a61e188d41b17b7cae7258b2e08215974cb51d7f7cb89893a9e
4eb40fc5a3d,PodSandboxId:50ca3ae500a2e0a6107d981b0924c139233deff63e724a3a01b355cb298b8b17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713782339926609237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98542d021c1579a6297e229b3c72ace,},Annotations:map[string]string{io.kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87c8e5071b3101013333015fc0e2d11e262168ef3ae336c3da95c8911871553,PodSa
ndboxId:92a4ef4ade159a5ee065deb5945fd4c857ccacf6e702b5496a97bdc22bcfe791,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713782339846013301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7fa9beead7b52a4e887c1dc4431871,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dce2c24d181494f5a032b1ca97445bc0c8ca16e280f781f2fe9667680c6
ff00,PodSandboxId:701814d87562835b2262a0ad5c2424dca08ae4bc77de5e34afcd4ebc6da23a1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713782339798738126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b0f2f70abeaca245aa3f96738d8202,},Annotations:map[string]string{io.kubernetes.container.hash: 78f47633,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a6353e1-0d07-46ca-8d9d-e19fde98157c name=/runtime.v1.RuntimeService/Lis
tContainers
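	The crio debug lines above are ordinary CRI gRPC traffic on the crio socket (Version, ImageFsInfo, ListContainers), typically generated by the kubelet's periodic container-state polling and by crictl. As a rough illustration only (not part of the test suite), the two RuntimeService calls seen in the log can be issued with the published CRI client bindings; the socket path and the imports below are assumptions matching what the log itself reports, not something read from this cluster.

	// Sketch: issue the Version and ListContainers RPCs logged above against
	// the crio socket. Assumes k8s.io/cri-api and google.golang.org/grpc are
	// available and that unix:///var/run/crio/crio.sock is the runtime endpoint.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the CRI-O socket; the "unix://" scheme is resolved by gRPC itself.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)

		// Equivalent of the logged /runtime.v1.RuntimeService/Version request.
		ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println(ver.RuntimeName, ver.RuntimeVersion) // e.g. "cri-o 1.29.1"

		// Equivalent of the logged ListContainers request with an empty filter,
		// which is why crio prints "No filters were applied, returning full container list".
		list, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range list.Containers {
			fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
		}
	}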
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	da39809ea2038       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 6 seconds ago       Running             hello-world-app           0                   b17e14cc83602       hello-world-app-86c47465fc-wrvcz
	7239d5ee0e626       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                   2 minutes ago       Running             headlamp                  0                   c8d2c78f493f4       headlamp-7559bf459f-bb5x7
	13c00009bfd8e       docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9                         2 minutes ago       Running             nginx                     0                   13b8c32886179       nginx
	98ce745d0650b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            3 minutes ago       Running             gcp-auth                  0                   0089ee81a2864       gcp-auth-5db96cd9b4-9bc6d
	82837449f00a1       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        4 minutes ago       Running             local-path-provisioner    0                   e87eadec82e10       local-path-provisioner-8d985888d-s7lgz
	3b5629db3b79b       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         4 minutes ago       Running             yakd                      0                   a582669af38b5       yakd-dashboard-5ddbf7d777-rz9f2
	747bedb41a651       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   5 minutes ago       Running             metrics-server            0                   b711b3fb32b9e       metrics-server-c59844bb4-phnbq
	dca8963cd05eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        5 minutes ago       Running             storage-provisioner       0                   8f9a4ee47413b       storage-provisioner
	d8e399e5bad2e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        5 minutes ago       Running             coredns                   0                   8fada0962ee40       coredns-7db6d8ff4d-2mxqp
	cc4ae3f334eb8       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                        5 minutes ago       Running             kube-proxy                0                   62aa08616b67d       kube-proxy-hlgg9
	228b310789699       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                        6 minutes ago       Running             kube-scheduler            0                   75be32aa73a6f       kube-scheduler-addons-649657
	49a61e188d41b       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                        6 minutes ago       Running             kube-apiserver            0                   50ca3ae500a2e       kube-apiserver-addons-649657
	e87c8e5071b31       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                        6 minutes ago       Running             kube-controller-manager   0                   92a4ef4ade159       kube-controller-manager-addons-649657
	3dce2c24d1814       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        6 minutes ago       Running             etcd                      0                   701814d875628       etcd-addons-649657
	
	
	==> coredns [d8e399e5bad2ea7afff9b2984de13eba02623820f9d265c24111fb4f7ca6de5c] <==
	[INFO] 10.244.0.8:41916 - 38226 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001142617s
	[INFO] 10.244.0.8:47573 - 53521 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000205944s
	[INFO] 10.244.0.8:47573 - 28703 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000109378s
	[INFO] 10.244.0.8:45327 - 40199 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000156094s
	[INFO] 10.244.0.8:45327 - 14341 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000223029s
	[INFO] 10.244.0.8:48774 - 51542 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00012261s
	[INFO] 10.244.0.8:48774 - 4695 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000153651s
	[INFO] 10.244.0.8:40647 - 10104 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000108597s
	[INFO] 10.244.0.8:40647 - 4733 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000122859s
	[INFO] 10.244.0.8:52147 - 48149 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00004062s
	[INFO] 10.244.0.8:52147 - 46870 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000043653s
	[INFO] 10.244.0.8:37217 - 19858 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042755s
	[INFO] 10.244.0.8:37217 - 37008 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000029418s
	[INFO] 10.244.0.8:38214 - 52639 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000033957s
	[INFO] 10.244.0.8:38214 - 23697 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000039481s
	[INFO] 10.244.0.22:56933 - 22929 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000507854s
	[INFO] 10.244.0.22:57259 - 47395 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000367276s
	[INFO] 10.244.0.22:41688 - 8569 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000113891s
	[INFO] 10.244.0.22:43352 - 17413 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100155s
	[INFO] 10.244.0.22:35586 - 62845 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000249472s
	[INFO] 10.244.0.22:57574 - 22388 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000224677s
	[INFO] 10.244.0.22:43388 - 34767 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001462991s
	[INFO] 10.244.0.22:47177 - 64805 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001191611s
	[INFO] 10.244.0.25:37565 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000421805s
	[INFO] 10.244.0.25:44554 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000091614s
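	The repeated NXDOMAIN entries above are expected rather than errors: pods resolve through the kubelet-generated resolv.conf, whose search path (<namespace>.svc.cluster.local, svc.cluster.local, cluster.local) together with options ndots:5 makes the resolver append every search suffix before trying the name as-is, so only the final query returns NOERROR. A minimal sketch of that expansion order follows; the search list and ndots value are assumed to be the Kubernetes defaults, not values read from this cluster.

	// Sketch: reproduce the resolver's search-list expansion that produces the
	// query sequence in the CoreDNS log above.
	package main

	import (
		"fmt"
		"strings"
	)

	// candidates returns the lookup order used by the stub resolver: names with
	// fewer than ndots dots are tried against every search suffix first, then as-is.
	func candidates(name string, ndots int, search []string) []string {
		if strings.HasSuffix(name, ".") {
			return []string{name} // already fully qualified: no expansion
		}
		var out []string
		absoluteFirst := strings.Count(name, ".") >= ndots
		if absoluteFirst {
			out = append(out, name+".")
		}
		for _, s := range search {
			out = append(out, name+"."+s+".")
		}
		if !absoluteFirst {
			out = append(out, name+".")
		}
		return out
	}

	func main() {
		// Assumed search path for a pod in the gcp-auth namespace.
		search := []string{"gcp-auth.svc.cluster.local", "svc.cluster.local", "cluster.local"}
		for _, q := range candidates("storage.googleapis.com", 5, search) {
			fmt.Println(q)
		}
	}

	Running this prints storage.googleapis.com.gcp-auth.svc.cluster.local., storage.googleapis.com.svc.cluster.local., storage.googleapis.com.cluster.local. and finally storage.googleapis.com., the same sequence CoreDNS logged for 10.244.0.22.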
	
	
	==> describe nodes <==
	Name:               addons-649657
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-649657
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=addons-649657
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T10_39_06_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-649657
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 10:39:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-649657
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 10:45:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 10:43:10 +0000   Mon, 22 Apr 2024 10:39:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 10:43:10 +0000   Mon, 22 Apr 2024 10:39:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 10:43:10 +0000   Mon, 22 Apr 2024 10:39:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 10:43:10 +0000   Mon, 22 Apr 2024 10:39:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.194
	  Hostname:    addons-649657
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 03fb13485e6f4e5fb50eeba42d90dd5d
	  System UUID:                03fb1348-5e6f-4e5f-b50e-eba42d90dd5d
	  Boot ID:                    df02515d-ac16-46de-9be1-a43fef15fe11
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-wrvcz          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-5db96cd9b4-9bc6d                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  headlamp                    headlamp-7559bf459f-bb5x7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 coredns-7db6d8ff4d-2mxqp                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m50s
	  kube-system                 etcd-addons-649657                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m3s
	  kube-system                 kube-apiserver-addons-649657              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-controller-manager-addons-649657     200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-proxy-hlgg9                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-scheduler-addons-649657              100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 metrics-server-c59844bb4-phnbq            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m44s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  local-path-storage          local-path-provisioner-8d985888d-s7lgz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-rz9f2           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m47s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m9s (x8 over 6m9s)  kubelet          Node addons-649657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m9s (x8 over 6m9s)  kubelet          Node addons-649657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m9s (x7 over 6m9s)  kubelet          Node addons-649657 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m3s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m3s                 kubelet          Node addons-649657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s                 kubelet          Node addons-649657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s                 kubelet          Node addons-649657 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m2s                 kubelet          Node addons-649657 status is now: NodeReady
	  Normal  RegisteredNode           5m51s                node-controller  Node addons-649657 event: Registered Node addons-649657 in Controller
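	For reference, the "Allocated resources" block is simply the column-wise sum of the requests and limits in the pod table above, expressed against the node's allocatable capacity (2 CPU, 3912780Ki memory). A quick sanity check with the numbers copied from that table (a sketch, not how kubectl derives the figures):

	// Sketch: verify the Allocated resources totals from the per-pod table.
	package main

	import "fmt"

	func main() {
		cpuRequestsMilli := 100 + 100 + 250 + 200 + 100 + 100 // coredns, etcd, apiserver, controller-manager, scheduler, metrics-server
		memRequestsMi := 70 + 100 + 200 + 128                 // coredns, etcd, metrics-server, yakd
		memLimitsMi := 170 + 256                              // coredns, yakd

		allocatableCPUMilli := 2 * 1000
		allocatableMemKi := 3912780

		fmt.Printf("cpu requests:    %dm (%d%%)\n", cpuRequestsMilli, cpuRequestsMilli*100/allocatableCPUMilli)
		fmt.Printf("memory requests: %dMi (%d%%)\n", memRequestsMi, memRequestsMi*1024*100/allocatableMemKi)
		fmt.Printf("memory limits:   %dMi (%d%%)\n", memLimitsMi, memLimitsMi*1024*100/allocatableMemKi)
		// Prints 850m (42%), 498Mi (13%) and 426Mi (11%), matching the block above.
	}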
	
	
	==> dmesg <==
	[  +5.005001] kauditd_printk_skb: 98 callbacks suppressed
	[  +5.547445] kauditd_printk_skb: 93 callbacks suppressed
	[  +5.339872] kauditd_printk_skb: 104 callbacks suppressed
	[ +15.729026] kauditd_printk_skb: 29 callbacks suppressed
	[Apr22 10:40] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.231293] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.524435] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.601251] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.051333] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.987413] kauditd_printk_skb: 41 callbacks suppressed
	[Apr22 10:41] kauditd_printk_skb: 24 callbacks suppressed
	[ +40.821261] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.649567] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.564957] kauditd_printk_skb: 11 callbacks suppressed
	[Apr22 10:42] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.851118] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.410296] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.232521] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.362981] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.343604] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.522649] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.496562] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.655237] kauditd_printk_skb: 30 callbacks suppressed
	[Apr22 10:44] kauditd_printk_skb: 10 callbacks suppressed
	[Apr22 10:45] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [3dce2c24d181494f5a032b1ca97445bc0c8ca16e280f781f2fe9667680c6ff00] <==
	{"level":"info","ts":"2024-04-22T10:40:37.901021Z","caller":"traceutil/trace.go:171","msg":"trace[669615030] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1100; }","duration":"165.079261ms","start":"2024-04-22T10:40:37.735936Z","end":"2024-04-22T10:40:37.901015Z","steps":["trace[669615030] 'agreement among raft nodes before linearized reading'  (duration: 159.377586ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:40:37.897124Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.759375ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14358"}
	{"level":"info","ts":"2024-04-22T10:40:37.901211Z","caller":"traceutil/trace.go:171","msg":"trace[216867311] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1100; }","duration":"124.870842ms","start":"2024-04-22T10:40:37.776333Z","end":"2024-04-22T10:40:37.901203Z","steps":["trace[216867311] 'agreement among raft nodes before linearized reading'  (duration: 120.706175ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T10:40:47.50139Z","caller":"traceutil/trace.go:171","msg":"trace[489243040] linearizableReadLoop","detail":"{readStateIndex:1218; appliedIndex:1217; }","duration":"271.15835ms","start":"2024-04-22T10:40:47.230219Z","end":"2024-04-22T10:40:47.501377Z","steps":["trace[489243040] 'read index received'  (duration: 271.03185ms)","trace[489243040] 'applied index is now lower than readState.Index'  (duration: 125.94µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-22T10:40:47.501684Z","caller":"traceutil/trace.go:171","msg":"trace[1704839840] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"312.928147ms","start":"2024-04-22T10:40:47.188746Z","end":"2024-04-22T10:40:47.501674Z","steps":["trace[1704839840] 'process raft request'  (duration: 312.542394ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:40:47.501873Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T10:40:47.18873Z","time spent":"313.002077ms","remote":"127.0.0.1:50692","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2186,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/snapshot-controller-745499f584\" mod_revision:1061 > success:<request_put:<key:\"/registry/replicasets/kube-system/snapshot-controller-745499f584\" value_size:2114 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/snapshot-controller-745499f584\" > >"}
	{"level":"warn","ts":"2024-04-22T10:40:47.502083Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"271.890046ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-04-22T10:40:47.502133Z","caller":"traceutil/trace.go:171","msg":"trace[100995071] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1180; }","duration":"271.960738ms","start":"2024-04-22T10:40:47.230165Z","end":"2024-04-22T10:40:47.502126Z","steps":["trace[100995071] 'agreement among raft nodes before linearized reading'  (duration: 271.843578ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:40:47.502424Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.126534ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85565"}
	{"level":"info","ts":"2024-04-22T10:40:47.502475Z","caller":"traceutil/trace.go:171","msg":"trace[1028049129] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1180; }","duration":"267.252649ms","start":"2024-04-22T10:40:47.235216Z","end":"2024-04-22T10:40:47.502468Z","steps":["trace[1028049129] 'agreement among raft nodes before linearized reading'  (duration: 267.027963ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:40:47.502716Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.181423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-22T10:40:47.502853Z","caller":"traceutil/trace.go:171","msg":"trace[344330702] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1180; }","duration":"141.339792ms","start":"2024-04-22T10:40:47.361506Z","end":"2024-04-22T10:40:47.502846Z","steps":["trace[344330702] 'agreement among raft nodes before linearized reading'  (duration: 141.196561ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:40:47.502994Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.703964ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14358"}
	{"level":"info","ts":"2024-04-22T10:40:47.503039Z","caller":"traceutil/trace.go:171","msg":"trace[1642437512] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1180; }","duration":"227.768994ms","start":"2024-04-22T10:40:47.275264Z","end":"2024-04-22T10:40:47.503033Z","steps":["trace[1642437512] 'agreement among raft nodes before linearized reading'  (duration: 227.671336ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T10:41:56.105094Z","caller":"traceutil/trace.go:171","msg":"trace[243768274] transaction","detail":"{read_only:false; response_revision:1320; number_of_response:1; }","duration":"355.024115ms","start":"2024-04-22T10:41:55.750051Z","end":"2024-04-22T10:41:56.105075Z","steps":["trace[243768274] 'process raft request'  (duration: 354.927316ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:41:56.105349Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T10:41:55.750037Z","time spent":"355.259656ms","remote":"127.0.0.1:50512","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1299 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-04-22T10:42:05.190135Z","caller":"traceutil/trace.go:171","msg":"trace[8659231] transaction","detail":"{read_only:false; response_revision:1369; number_of_response:1; }","duration":"185.39644ms","start":"2024-04-22T10:42:05.004718Z","end":"2024-04-22T10:42:05.190115Z","steps":["trace[8659231] 'process raft request'  (duration: 185.221295ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T10:42:25.670361Z","caller":"traceutil/trace.go:171","msg":"trace[1108447390] linearizableReadLoop","detail":"{readStateIndex:1628; appliedIndex:1627; }","duration":"218.332811ms","start":"2024-04-22T10:42:25.451993Z","end":"2024-04-22T10:42:25.670326Z","steps":["trace[1108447390] 'read index received'  (duration: 218.181293ms)","trace[1108447390] 'applied index is now lower than readState.Index'  (duration: 150.936µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-22T10:42:25.670919Z","caller":"traceutil/trace.go:171","msg":"trace[474003058] transaction","detail":"{read_only:false; response_revision:1562; number_of_response:1; }","duration":"341.338256ms","start":"2024-04-22T10:42:25.329561Z","end":"2024-04-22T10:42:25.670899Z","steps":["trace[474003058] 'process raft request'  (duration: 340.65397ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:42:25.671193Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T10:42:25.329544Z","time spent":"341.453615ms","remote":"127.0.0.1:50414","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1534 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-22T10:42:25.671452Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.450237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/test-local-path\" ","response":"range_response_count:1 size:3596"}
	{"level":"info","ts":"2024-04-22T10:42:25.671574Z","caller":"traceutil/trace.go:171","msg":"trace[1084977975] range","detail":"{range_begin:/registry/pods/default/test-local-path; range_end:; response_count:1; response_revision:1562; }","duration":"219.592913ms","start":"2024-04-22T10:42:25.45197Z","end":"2024-04-22T10:42:25.671563Z","steps":["trace[1084977975] 'agreement among raft nodes before linearized reading'  (duration: 219.387997ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:42:25.672257Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.820562ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6861"}
	{"level":"info","ts":"2024-04-22T10:42:25.672288Z","caller":"traceutil/trace.go:171","msg":"trace[895097194] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1562; }","duration":"131.874499ms","start":"2024-04-22T10:42:25.540404Z","end":"2024-04-22T10:42:25.672279Z","steps":["trace[895097194] 'agreement among raft nodes before linearized reading'  (duration: 131.345061ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:43:19.734498Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.474785ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5517348271807919333 > lease_revoke:<id:4c918f056339f4a2>","response":"size:29"}
	
	
	==> gcp-auth [98ce745d0650b4b452c0bafed8251ffb60086e77a34f7a677a87af3eb5451dd6] <==
	2024/04/22 10:41:56 GCP Auth Webhook started!
	2024/04/22 10:41:58 Ready to marshal response ...
	2024/04/22 10:41:58 Ready to write response ...
	2024/04/22 10:42:02 Ready to marshal response ...
	2024/04/22 10:42:02 Ready to write response ...
	2024/04/22 10:42:08 Ready to marshal response ...
	2024/04/22 10:42:08 Ready to write response ...
	2024/04/22 10:42:15 Ready to marshal response ...
	2024/04/22 10:42:15 Ready to write response ...
	2024/04/22 10:42:15 Ready to marshal response ...
	2024/04/22 10:42:15 Ready to write response ...
	2024/04/22 10:42:27 Ready to marshal response ...
	2024/04/22 10:42:27 Ready to write response ...
	2024/04/22 10:42:29 Ready to marshal response ...
	2024/04/22 10:42:29 Ready to write response ...
	2024/04/22 10:42:34 Ready to marshal response ...
	2024/04/22 10:42:34 Ready to write response ...
	2024/04/22 10:42:36 Ready to marshal response ...
	2024/04/22 10:42:36 Ready to write response ...
	2024/04/22 10:42:36 Ready to marshal response ...
	2024/04/22 10:42:36 Ready to write response ...
	2024/04/22 10:42:36 Ready to marshal response ...
	2024/04/22 10:42:36 Ready to write response ...
	2024/04/22 10:44:57 Ready to marshal response ...
	2024/04/22 10:44:57 Ready to write response ...
	
	
	==> kernel <==
	 10:45:08 up 6 min,  0 users,  load average: 0.34, 0.91, 0.53
	Linux addons-649657 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [49a61e188d41b17b7cae7258b2e08215974cb51d7f7cb89893a9e4eb40fc5a3d] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0422 10:41:04.595077       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.190.243:443/apis/metrics.k8s.io/v1beta1: Get "https://10.101.190.243:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.101.190.243:443: connect: connection refused
	E0422 10:41:04.600511       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.190.243:443/apis/metrics.k8s.io/v1beta1: Get "https://10.101.190.243:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.101.190.243:443: connect: connection refused
	I0422 10:41:04.681258       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0422 10:42:13.738339       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0422 10:42:18.784548       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0422 10:42:19.874743       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0422 10:42:20.046694       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"gadget\" not found]"
	I0422 10:42:34.308356       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0422 10:42:34.539119       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.158.140"}
	I0422 10:42:36.806521       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.1.128"}
	I0422 10:42:47.717715       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 10:42:47.717749       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 10:42:47.752376       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 10:42:47.752479       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 10:42:47.758532       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 10:42:47.758601       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 10:42:47.765186       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 10:42:47.765266       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 10:42:47.828660       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 10:42:47.828736       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0422 10:42:48.759247       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0422 10:42:48.829584       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0422 10:42:48.850043       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0422 10:44:58.101394       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.207.48"}
	
	
	==> kube-controller-manager [e87c8e5071b3101013333015fc0e2d11e262168ef3ae336c3da95c8911871553] <==
	W0422 10:43:45.758583       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:43:45.758684       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:43:51.407245       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:43:51.407305       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:44:01.904022       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:44:01.904082       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:44:07.581953       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:44:07.582017       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:44:35.223749       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:44:35.224150       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:44:35.424701       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:44:35.424888       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:44:44.267187       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:44:44.267326       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:44:46.720465       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:44:46.720632       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0422 10:44:57.947396       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="56.344325ms"
	I0422 10:44:57.962903       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="15.405109ms"
	I0422 10:44:57.963239       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="81.492µs"
	I0422 10:44:57.968354       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="24.249µs"
	I0422 10:45:00.484594       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0422 10:45:00.502875       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0422 10:45:00.509194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="3.908µs"
	I0422 10:45:02.673688       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="12.973512ms"
	I0422 10:45:02.673864       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="46.828µs"
	
	
	==> kube-proxy [cc4ae3f334eb8f15a02ce1cb74c938edda287420283c1625060ec6de34223cfc] <==
	I0422 10:39:21.152437       1 server_linux.go:69] "Using iptables proxy"
	I0422 10:39:21.171510       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.194"]
	I0422 10:39:21.280064       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 10:39:21.280132       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 10:39:21.280150       1 server_linux.go:165] "Using iptables Proxier"
	I0422 10:39:21.286056       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 10:39:21.286224       1 server.go:872] "Version info" version="v1.30.0"
	I0422 10:39:21.286263       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 10:39:21.288075       1 config.go:192] "Starting service config controller"
	I0422 10:39:21.288090       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 10:39:21.288107       1 config.go:101] "Starting endpoint slice config controller"
	I0422 10:39:21.288111       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 10:39:21.289966       1 config.go:319] "Starting node config controller"
	I0422 10:39:21.289975       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 10:39:21.388622       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 10:39:21.388667       1 shared_informer.go:320] Caches are synced for service config
	I0422 10:39:21.390001       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [228b3107896998ed67a5c41465c19156bb68c40d0b7d32997369f4ceea0e9199] <==
	E0422 10:39:02.515981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 10:39:02.516055       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 10:39:02.516738       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 10:39:02.517181       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 10:39:02.517349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 10:39:02.517475       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 10:39:02.517591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 10:39:02.517723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 10:39:03.391483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 10:39:03.391527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 10:39:03.391496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 10:39:03.391550       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0422 10:39:03.521351       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 10:39:03.521410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 10:39:03.697672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 10:39:03.697886       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 10:39:03.698616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 10:39:03.698663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 10:39:03.837590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 10:39:03.837689       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 10:39:03.841600       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 10:39:03.841681       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0422 10:39:04.061242       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 10:39:04.061511       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0422 10:39:06.602472       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 22 10:44:59 addons-649657 kubelet[1279]: I0422 10:44:59.254631    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5c2q\" (UniqueName: \"kubernetes.io/projected/a8f74405-5f73-4306-a4ca-244216a00b42-kube-api-access-z5c2q\") pod \"a8f74405-5f73-4306-a4ca-244216a00b42\" (UID: \"a8f74405-5f73-4306-a4ca-244216a00b42\") "
	Apr 22 10:44:59 addons-649657 kubelet[1279]: I0422 10:44:59.259496    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f74405-5f73-4306-a4ca-244216a00b42-kube-api-access-z5c2q" (OuterVolumeSpecName: "kube-api-access-z5c2q") pod "a8f74405-5f73-4306-a4ca-244216a00b42" (UID: "a8f74405-5f73-4306-a4ca-244216a00b42"). InnerVolumeSpecName "kube-api-access-z5c2q". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 22 10:44:59 addons-649657 kubelet[1279]: I0422 10:44:59.355053    1279 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-z5c2q\" (UniqueName: \"kubernetes.io/projected/a8f74405-5f73-4306-a4ca-244216a00b42-kube-api-access-z5c2q\") on node \"addons-649657\" DevicePath \"\""
	Apr 22 10:44:59 addons-649657 kubelet[1279]: I0422 10:44:59.566158    1279 scope.go:117] "RemoveContainer" containerID="b3df6584aa2945a5f3cf36d99540e1618c244a2340c6293f7a72d6c1a21daed3"
	Apr 22 10:44:59 addons-649657 kubelet[1279]: I0422 10:44:59.597658    1279 scope.go:117] "RemoveContainer" containerID="b3df6584aa2945a5f3cf36d99540e1618c244a2340c6293f7a72d6c1a21daed3"
	Apr 22 10:44:59 addons-649657 kubelet[1279]: E0422 10:44:59.598412    1279 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3df6584aa2945a5f3cf36d99540e1618c244a2340c6293f7a72d6c1a21daed3\": container with ID starting with b3df6584aa2945a5f3cf36d99540e1618c244a2340c6293f7a72d6c1a21daed3 not found: ID does not exist" containerID="b3df6584aa2945a5f3cf36d99540e1618c244a2340c6293f7a72d6c1a21daed3"
	Apr 22 10:44:59 addons-649657 kubelet[1279]: I0422 10:44:59.598442    1279 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3df6584aa2945a5f3cf36d99540e1618c244a2340c6293f7a72d6c1a21daed3"} err="failed to get container status \"b3df6584aa2945a5f3cf36d99540e1618c244a2340c6293f7a72d6c1a21daed3\": rpc error: code = NotFound desc = could not find container \"b3df6584aa2945a5f3cf36d99540e1618c244a2340c6293f7a72d6c1a21daed3\": container with ID starting with b3df6584aa2945a5f3cf36d99540e1618c244a2340c6293f7a72d6c1a21daed3 not found: ID does not exist"
	Apr 22 10:45:01 addons-649657 kubelet[1279]: I0422 10:45:01.414994    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc48315-5789-48cd-8d01-8a8ee1c2e0be" path="/var/lib/kubelet/pods/0bc48315-5789-48cd-8d01-8a8ee1c2e0be/volumes"
	Apr 22 10:45:01 addons-649657 kubelet[1279]: I0422 10:45:01.415469    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3531be8c-485c-43e8-8241-4466ed5350b1" path="/var/lib/kubelet/pods/3531be8c-485c-43e8-8241-4466ed5350b1/volumes"
	Apr 22 10:45:01 addons-649657 kubelet[1279]: I0422 10:45:01.415891    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8f74405-5f73-4306-a4ca-244216a00b42" path="/var/lib/kubelet/pods/a8f74405-5f73-4306-a4ca-244216a00b42/volumes"
	Apr 22 10:45:03 addons-649657 kubelet[1279]: I0422 10:45:03.895753    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60cd40d5-77eb-408e-ba25-eafeb255debe-webhook-cert\") pod \"60cd40d5-77eb-408e-ba25-eafeb255debe\" (UID: \"60cd40d5-77eb-408e-ba25-eafeb255debe\") "
	Apr 22 10:45:03 addons-649657 kubelet[1279]: I0422 10:45:03.895882    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg4vf\" (UniqueName: \"kubernetes.io/projected/60cd40d5-77eb-408e-ba25-eafeb255debe-kube-api-access-mg4vf\") pod \"60cd40d5-77eb-408e-ba25-eafeb255debe\" (UID: \"60cd40d5-77eb-408e-ba25-eafeb255debe\") "
	Apr 22 10:45:03 addons-649657 kubelet[1279]: I0422 10:45:03.903029    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60cd40d5-77eb-408e-ba25-eafeb255debe-kube-api-access-mg4vf" (OuterVolumeSpecName: "kube-api-access-mg4vf") pod "60cd40d5-77eb-408e-ba25-eafeb255debe" (UID: "60cd40d5-77eb-408e-ba25-eafeb255debe"). InnerVolumeSpecName "kube-api-access-mg4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 22 10:45:03 addons-649657 kubelet[1279]: I0422 10:45:03.903166    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60cd40d5-77eb-408e-ba25-eafeb255debe-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "60cd40d5-77eb-408e-ba25-eafeb255debe" (UID: "60cd40d5-77eb-408e-ba25-eafeb255debe"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 22 10:45:03 addons-649657 kubelet[1279]: I0422 10:45:03.996681    1279 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60cd40d5-77eb-408e-ba25-eafeb255debe-webhook-cert\") on node \"addons-649657\" DevicePath \"\""
	Apr 22 10:45:03 addons-649657 kubelet[1279]: I0422 10:45:03.996748    1279 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mg4vf\" (UniqueName: \"kubernetes.io/projected/60cd40d5-77eb-408e-ba25-eafeb255debe-kube-api-access-mg4vf\") on node \"addons-649657\" DevicePath \"\""
	Apr 22 10:45:04 addons-649657 kubelet[1279]: I0422 10:45:04.666008    1279 scope.go:117] "RemoveContainer" containerID="19f0f8bc292ed00cc74849a2fcdcde095113be39e0934fea520248c65c0478b1"
	Apr 22 10:45:05 addons-649657 kubelet[1279]: I0422 10:45:05.412410    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60cd40d5-77eb-408e-ba25-eafeb255debe" path="/var/lib/kubelet/pods/60cd40d5-77eb-408e-ba25-eafeb255debe/volumes"
	Apr 22 10:45:05 addons-649657 kubelet[1279]: E0422 10:45:05.458142    1279 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 10:45:05 addons-649657 kubelet[1279]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 10:45:05 addons-649657 kubelet[1279]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 10:45:05 addons-649657 kubelet[1279]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 10:45:05 addons-649657 kubelet[1279]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 10:45:07 addons-649657 kubelet[1279]: I0422 10:45:07.272704    1279 scope.go:117] "RemoveContainer" containerID="8e53e4efa53590c6fe4278ba7f05a2a48f730509a6aad04790cbcc6f87279ce5"
	Apr 22 10:45:07 addons-649657 kubelet[1279]: I0422 10:45:07.297196    1279 scope.go:117] "RemoveContainer" containerID="bfe7b37b7911c734bb2cecd23824a6d0f9e7fc0597db799d84ae3fdbfae185a6"
	
	
	==> storage-provisioner [dca8963cd05ebf8e5c2ab895728ad12bcce57a49e34622e946bd3d0130d46b17] <==
	I0422 10:39:26.741565       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 10:39:26.867175       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 10:39:26.884687       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0422 10:39:26.936432       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0422 10:39:26.950023       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-649657_98567884-08f1-4dd1-a87f-c9e2cb61138a!
	I0422 10:39:26.939941       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4aeef8a9-d3c9-4821-98b6-3a1ec921815c", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-649657_98567884-08f1-4dd1-a87f-c9e2cb61138a became leader
	I0422 10:39:27.151028       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-649657_98567884-08f1-4dd1-a87f-c9e2cb61138a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-649657 -n addons-649657
helpers_test.go:261: (dbg) Run:  kubectl --context addons-649657 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.69s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (367.22s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 25.577792ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-phnbq" [ce74ad1e-3a35-470e-962e-901dcdc84a6d] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011494154s
addons_test.go:415: (dbg) Run:  kubectl --context addons-649657 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-649657 top pods -n kube-system: exit status 1 (120.605352ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2mxqp, age: 2m44.481425665s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-649657 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-649657 top pods -n kube-system: exit status 1 (70.689946ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2mxqp, age: 2m48.249054879s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-649657 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-649657 top pods -n kube-system: exit status 1 (71.443781ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2mxqp, age: 2m53.016734476s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-649657 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-649657 top pods -n kube-system: exit status 1 (69.302768ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2mxqp, age: 2m58.794855107s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-649657 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-649657 top pods -n kube-system: exit status 1 (65.143822ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2mxqp, age: 3m5.629800028s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-649657 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-649657 top pods -n kube-system: exit status 1 (81.318638ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2mxqp, age: 3m18.979794108s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-649657 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-649657 top pods -n kube-system: exit status 1 (69.576803ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2mxqp, age: 3m36.881667659s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-649657 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-649657 top pods -n kube-system: exit status 1 (64.20134ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2mxqp, age: 4m14.888605409s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-649657 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-649657 top pods -n kube-system: exit status 1 (65.278382ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2mxqp, age: 5m29.699639325s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-649657 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-649657 top pods -n kube-system: exit status 1 (61.760794ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2mxqp, age: 6m27.541973413s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-649657 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-649657 top pods -n kube-system: exit status 1 (70.850194ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2mxqp, age: 7m21.417476199s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-649657 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-649657 top pods -n kube-system: exit status 1 (65.642692ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2mxqp, age: 8m43.35124854s

                                                
                                                
** /stderr **
addons_test.go:429: failed checking metric server: exit status 1
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-649657 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-649657 -n addons-649657
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-649657 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-649657 logs -n 25: (1.674557119s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC | 22 Apr 24 10:38 UTC |
	| delete  | -p download-only-205366                                                                     | download-only-205366 | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC | 22 Apr 24 10:38 UTC |
	| delete  | -p download-only-692083                                                                     | download-only-692083 | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC | 22 Apr 24 10:38 UTC |
	| delete  | -p download-only-205366                                                                     | download-only-205366 | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC | 22 Apr 24 10:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-683094 | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC |                     |
	|         | binary-mirror-683094                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40437                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-683094                                                                     | binary-mirror-683094 | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC | 22 Apr 24 10:38 UTC |
	| addons  | enable dashboard -p                                                                         | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC |                     |
	|         | addons-649657                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC |                     |
	|         | addons-649657                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-649657 --wait=true                                                                | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC | 22 Apr 24 10:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-649657 addons disable                                                                | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-649657 ip                                                                            | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	| addons  | addons-649657 addons disable                                                                | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | addons-649657                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-649657 ssh cat                                                                       | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | /opt/local-path-provisioner/pvc-60f66f58-3d14-4dd8-976b-05bdb591f503_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-649657 addons disable                                                                | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | -p addons-649657                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | addons-649657                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | -p addons-649657                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-649657 addons                                                                        | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-649657 ssh curl -s                                                                   | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-649657 addons                                                                        | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:42 UTC | 22 Apr 24 10:42 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-649657 ip                                                                            | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:44 UTC | 22 Apr 24 10:44 UTC |
	| addons  | addons-649657 addons disable                                                                | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:44 UTC | 22 Apr 24 10:44 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-649657 addons disable                                                                | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:44 UTC | 22 Apr 24 10:45 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-649657 addons                                                                        | addons-649657        | jenkins | v1.33.0 | 22 Apr 24 10:48 UTC | 22 Apr 24 10:48 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 10:38:23
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 10:38:23.079810   15606 out.go:291] Setting OutFile to fd 1 ...
	I0422 10:38:23.080046   15606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 10:38:23.080054   15606 out.go:304] Setting ErrFile to fd 2...
	I0422 10:38:23.080059   15606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 10:38:23.080271   15606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 10:38:23.080916   15606 out.go:298] Setting JSON to false
	I0422 10:38:23.081723   15606 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1246,"bootTime":1713781057,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 10:38:23.081781   15606 start.go:139] virtualization: kvm guest
	I0422 10:38:23.083798   15606 out.go:177] * [addons-649657] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 10:38:23.085280   15606 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 10:38:23.085241   15606 notify.go:220] Checking for updates...
	I0422 10:38:23.086718   15606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 10:38:23.088063   15606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 10:38:23.089354   15606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 10:38:23.090612   15606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 10:38:23.091947   15606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 10:38:23.093438   15606 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 10:38:23.124707   15606 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 10:38:23.126058   15606 start.go:297] selected driver: kvm2
	I0422 10:38:23.126074   15606 start.go:901] validating driver "kvm2" against <nil>
	I0422 10:38:23.126089   15606 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 10:38:23.126747   15606 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 10:38:23.126830   15606 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18711-7633/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 10:38:23.140835   15606 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 10:38:23.140884   15606 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 10:38:23.141113   15606 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 10:38:23.141189   15606 cni.go:84] Creating CNI manager for ""
	I0422 10:38:23.141205   15606 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 10:38:23.141215   15606 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 10:38:23.141274   15606 start.go:340] cluster config:
	{Name:addons-649657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-649657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 10:38:23.141370   15606 iso.go:125] acquiring lock: {Name:mkb6ac9fd17ffabc92a94047094130aad6203a95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 10:38:23.144113   15606 out.go:177] * Starting "addons-649657" primary control-plane node in "addons-649657" cluster
	I0422 10:38:23.145265   15606 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 10:38:23.145301   15606 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 10:38:23.145313   15606 cache.go:56] Caching tarball of preloaded images
	I0422 10:38:23.145394   15606 preload.go:173] Found /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 10:38:23.145406   15606 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 10:38:23.145690   15606 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/config.json ...
	I0422 10:38:23.145715   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/config.json: {Name:mk9bfe842d09f1f35d378a2cdb4c6d5de6c57750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:23.145841   15606 start.go:360] acquireMachinesLock for addons-649657: {Name:mk5cb9b294e703b264c1f97ac968ffd01e93b576 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 10:38:23.145898   15606 start.go:364] duration metric: took 41.92µs to acquireMachinesLock for "addons-649657"
	I0422 10:38:23.145930   15606 start.go:93] Provisioning new machine with config: &{Name:addons-649657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-649657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 10:38:23.146001   15606 start.go:125] createHost starting for "" (driver="kvm2")
	I0422 10:38:23.147636   15606 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0422 10:38:23.147754   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:38:23.147795   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:38:23.161398   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36741
	I0422 10:38:23.161866   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:38:23.162402   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:38:23.162423   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:38:23.162793   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:38:23.163031   15606 main.go:141] libmachine: (addons-649657) Calling .GetMachineName
	I0422 10:38:23.163187   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:38:23.163371   15606 start.go:159] libmachine.API.Create for "addons-649657" (driver="kvm2")
	I0422 10:38:23.163408   15606 client.go:168] LocalClient.Create starting
	I0422 10:38:23.163449   15606 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem
	I0422 10:38:23.231391   15606 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem
	I0422 10:38:23.364680   15606 main.go:141] libmachine: Running pre-create checks...
	I0422 10:38:23.364704   15606 main.go:141] libmachine: (addons-649657) Calling .PreCreateCheck
	I0422 10:38:23.365240   15606 main.go:141] libmachine: (addons-649657) Calling .GetConfigRaw
	I0422 10:38:23.365667   15606 main.go:141] libmachine: Creating machine...
	I0422 10:38:23.365683   15606 main.go:141] libmachine: (addons-649657) Calling .Create
	I0422 10:38:23.365837   15606 main.go:141] libmachine: (addons-649657) Creating KVM machine...
	I0422 10:38:23.366932   15606 main.go:141] libmachine: (addons-649657) DBG | found existing default KVM network
	I0422 10:38:23.367818   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:23.367670   15628 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0422 10:38:23.367858   15606 main.go:141] libmachine: (addons-649657) DBG | created network xml: 
	I0422 10:38:23.367882   15606 main.go:141] libmachine: (addons-649657) DBG | <network>
	I0422 10:38:23.367896   15606 main.go:141] libmachine: (addons-649657) DBG |   <name>mk-addons-649657</name>
	I0422 10:38:23.367909   15606 main.go:141] libmachine: (addons-649657) DBG |   <dns enable='no'/>
	I0422 10:38:23.367918   15606 main.go:141] libmachine: (addons-649657) DBG |   
	I0422 10:38:23.367932   15606 main.go:141] libmachine: (addons-649657) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0422 10:38:23.367943   15606 main.go:141] libmachine: (addons-649657) DBG |     <dhcp>
	I0422 10:38:23.367953   15606 main.go:141] libmachine: (addons-649657) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0422 10:38:23.367964   15606 main.go:141] libmachine: (addons-649657) DBG |     </dhcp>
	I0422 10:38:23.367975   15606 main.go:141] libmachine: (addons-649657) DBG |   </ip>
	I0422 10:38:23.367985   15606 main.go:141] libmachine: (addons-649657) DBG |   
	I0422 10:38:23.367995   15606 main.go:141] libmachine: (addons-649657) DBG | </network>
	I0422 10:38:23.368009   15606 main.go:141] libmachine: (addons-649657) DBG | 
	I0422 10:38:23.373183   15606 main.go:141] libmachine: (addons-649657) DBG | trying to create private KVM network mk-addons-649657 192.168.39.0/24...
	I0422 10:38:23.437105   15606 main.go:141] libmachine: (addons-649657) DBG | private KVM network mk-addons-649657 192.168.39.0/24 created
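
Stripped of the libmachine DBG prefixes, the private network that was just created corresponds to this libvirt definition (transcribed from the log lines above; the 192.168.39.0/24 range was chosen because the log reports it as the first free private subnet):

	<network>
	  <name>mk-addons-649657</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
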
	I0422 10:38:23.437182   15606 main.go:141] libmachine: (addons-649657) Setting up store path in /home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657 ...
	I0422 10:38:23.437213   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:23.437102   15628 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 10:38:23.437231   15606 main.go:141] libmachine: (addons-649657) Building disk image from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0422 10:38:23.437263   15606 main.go:141] libmachine: (addons-649657) Downloading /home/jenkins/minikube-integration/18711-7633/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0422 10:38:23.664867   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:23.664712   15628 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa...
	I0422 10:38:23.779577   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:23.779438   15628 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/addons-649657.rawdisk...
	I0422 10:38:23.779601   15606 main.go:141] libmachine: (addons-649657) DBG | Writing magic tar header
	I0422 10:38:23.779614   15606 main.go:141] libmachine: (addons-649657) DBG | Writing SSH key tar header
	I0422 10:38:23.779625   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:23.779554   15628 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657 ...
	I0422 10:38:23.779645   15606 main.go:141] libmachine: (addons-649657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657
	I0422 10:38:23.779670   15606 main.go:141] libmachine: (addons-649657) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657 (perms=drwx------)
	I0422 10:38:23.779680   15606 main.go:141] libmachine: (addons-649657) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines (perms=drwxr-xr-x)
	I0422 10:38:23.779712   15606 main.go:141] libmachine: (addons-649657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines
	I0422 10:38:23.779769   15606 main.go:141] libmachine: (addons-649657) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube (perms=drwxr-xr-x)
	I0422 10:38:23.779786   15606 main.go:141] libmachine: (addons-649657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 10:38:23.779795   15606 main.go:141] libmachine: (addons-649657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633
	I0422 10:38:23.779800   15606 main.go:141] libmachine: (addons-649657) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 10:38:23.779808   15606 main.go:141] libmachine: (addons-649657) DBG | Checking permissions on dir: /home/jenkins
	I0422 10:38:23.779816   15606 main.go:141] libmachine: (addons-649657) DBG | Checking permissions on dir: /home
	I0422 10:38:23.779828   15606 main.go:141] libmachine: (addons-649657) DBG | Skipping /home - not owner
	I0422 10:38:23.779881   15606 main.go:141] libmachine: (addons-649657) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633 (perms=drwxrwxr-x)
	I0422 10:38:23.779905   15606 main.go:141] libmachine: (addons-649657) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 10:38:23.779915   15606 main.go:141] libmachine: (addons-649657) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 10:38:23.779926   15606 main.go:141] libmachine: (addons-649657) Creating domain...
	I0422 10:38:23.781015   15606 main.go:141] libmachine: (addons-649657) define libvirt domain using xml: 
	I0422 10:38:23.781039   15606 main.go:141] libmachine: (addons-649657) <domain type='kvm'>
	I0422 10:38:23.781050   15606 main.go:141] libmachine: (addons-649657)   <name>addons-649657</name>
	I0422 10:38:23.781058   15606 main.go:141] libmachine: (addons-649657)   <memory unit='MiB'>4000</memory>
	I0422 10:38:23.781068   15606 main.go:141] libmachine: (addons-649657)   <vcpu>2</vcpu>
	I0422 10:38:23.781087   15606 main.go:141] libmachine: (addons-649657)   <features>
	I0422 10:38:23.781120   15606 main.go:141] libmachine: (addons-649657)     <acpi/>
	I0422 10:38:23.781223   15606 main.go:141] libmachine: (addons-649657)     <apic/>
	I0422 10:38:23.781246   15606 main.go:141] libmachine: (addons-649657)     <pae/>
	I0422 10:38:23.781257   15606 main.go:141] libmachine: (addons-649657)     
	I0422 10:38:23.781264   15606 main.go:141] libmachine: (addons-649657)   </features>
	I0422 10:38:23.781273   15606 main.go:141] libmachine: (addons-649657)   <cpu mode='host-passthrough'>
	I0422 10:38:23.781286   15606 main.go:141] libmachine: (addons-649657)   
	I0422 10:38:23.781317   15606 main.go:141] libmachine: (addons-649657)   </cpu>
	I0422 10:38:23.781337   15606 main.go:141] libmachine: (addons-649657)   <os>
	I0422 10:38:23.781350   15606 main.go:141] libmachine: (addons-649657)     <type>hvm</type>
	I0422 10:38:23.781361   15606 main.go:141] libmachine: (addons-649657)     <boot dev='cdrom'/>
	I0422 10:38:23.781372   15606 main.go:141] libmachine: (addons-649657)     <boot dev='hd'/>
	I0422 10:38:23.781383   15606 main.go:141] libmachine: (addons-649657)     <bootmenu enable='no'/>
	I0422 10:38:23.781393   15606 main.go:141] libmachine: (addons-649657)   </os>
	I0422 10:38:23.781403   15606 main.go:141] libmachine: (addons-649657)   <devices>
	I0422 10:38:23.781415   15606 main.go:141] libmachine: (addons-649657)     <disk type='file' device='cdrom'>
	I0422 10:38:23.781438   15606 main.go:141] libmachine: (addons-649657)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/boot2docker.iso'/>
	I0422 10:38:23.781454   15606 main.go:141] libmachine: (addons-649657)       <target dev='hdc' bus='scsi'/>
	I0422 10:38:23.781465   15606 main.go:141] libmachine: (addons-649657)       <readonly/>
	I0422 10:38:23.781477   15606 main.go:141] libmachine: (addons-649657)     </disk>
	I0422 10:38:23.781489   15606 main.go:141] libmachine: (addons-649657)     <disk type='file' device='disk'>
	I0422 10:38:23.781510   15606 main.go:141] libmachine: (addons-649657)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 10:38:23.781531   15606 main.go:141] libmachine: (addons-649657)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/addons-649657.rawdisk'/>
	I0422 10:38:23.781549   15606 main.go:141] libmachine: (addons-649657)       <target dev='hda' bus='virtio'/>
	I0422 10:38:23.781556   15606 main.go:141] libmachine: (addons-649657)     </disk>
	I0422 10:38:23.781568   15606 main.go:141] libmachine: (addons-649657)     <interface type='network'>
	I0422 10:38:23.781580   15606 main.go:141] libmachine: (addons-649657)       <source network='mk-addons-649657'/>
	I0422 10:38:23.781592   15606 main.go:141] libmachine: (addons-649657)       <model type='virtio'/>
	I0422 10:38:23.781603   15606 main.go:141] libmachine: (addons-649657)     </interface>
	I0422 10:38:23.781614   15606 main.go:141] libmachine: (addons-649657)     <interface type='network'>
	I0422 10:38:23.781626   15606 main.go:141] libmachine: (addons-649657)       <source network='default'/>
	I0422 10:38:23.781638   15606 main.go:141] libmachine: (addons-649657)       <model type='virtio'/>
	I0422 10:38:23.781648   15606 main.go:141] libmachine: (addons-649657)     </interface>
	I0422 10:38:23.781660   15606 main.go:141] libmachine: (addons-649657)     <serial type='pty'>
	I0422 10:38:23.781670   15606 main.go:141] libmachine: (addons-649657)       <target port='0'/>
	I0422 10:38:23.781681   15606 main.go:141] libmachine: (addons-649657)     </serial>
	I0422 10:38:23.781692   15606 main.go:141] libmachine: (addons-649657)     <console type='pty'>
	I0422 10:38:23.781713   15606 main.go:141] libmachine: (addons-649657)       <target type='serial' port='0'/>
	I0422 10:38:23.781726   15606 main.go:141] libmachine: (addons-649657)     </console>
	I0422 10:38:23.781736   15606 main.go:141] libmachine: (addons-649657)     <rng model='virtio'>
	I0422 10:38:23.781748   15606 main.go:141] libmachine: (addons-649657)       <backend model='random'>/dev/random</backend>
	I0422 10:38:23.781759   15606 main.go:141] libmachine: (addons-649657)     </rng>
	I0422 10:38:23.781770   15606 main.go:141] libmachine: (addons-649657)     
	I0422 10:38:23.781782   15606 main.go:141] libmachine: (addons-649657)     
	I0422 10:38:23.781792   15606 main.go:141] libmachine: (addons-649657)   </devices>
	I0422 10:38:23.781807   15606 main.go:141] libmachine: (addons-649657) </domain>
	I0422 10:38:23.781816   15606 main.go:141] libmachine: (addons-649657) 
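
Transcribed from the log lines above (blank continuation lines dropped), the guest that libmachine defines is the following libvirt domain: 4000 MiB of RAM, 2 vCPUs in host-passthrough mode, the boot2docker ISO attached as a CD-ROM, a raw disk image for the VM, and two virtio NICs, one on the private mk-addons-649657 network and one on libvirt's default network:

	<domain type='kvm'>
	  <name>addons-649657</name>
	  <memory unit='MiB'>4000</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/addons-649657.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-649657'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
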
	I0422 10:38:23.787082   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:22:b9:16 in network default
	I0422 10:38:23.787647   15606 main.go:141] libmachine: (addons-649657) Ensuring networks are active...
	I0422 10:38:23.787663   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:23.788321   15606 main.go:141] libmachine: (addons-649657) Ensuring network default is active
	I0422 10:38:23.788689   15606 main.go:141] libmachine: (addons-649657) Ensuring network mk-addons-649657 is active
	I0422 10:38:23.789159   15606 main.go:141] libmachine: (addons-649657) Getting domain xml...
	I0422 10:38:23.789765   15606 main.go:141] libmachine: (addons-649657) Creating domain...
	I0422 10:38:25.195662   15606 main.go:141] libmachine: (addons-649657) Waiting to get IP...
	I0422 10:38:25.196442   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:25.196852   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:25.196888   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:25.196826   15628 retry.go:31] will retry after 232.878498ms: waiting for machine to come up
	I0422 10:38:25.431389   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:25.431780   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:25.431848   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:25.431770   15628 retry.go:31] will retry after 346.743722ms: waiting for machine to come up
	I0422 10:38:25.780676   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:25.781106   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:25.781130   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:25.781068   15628 retry.go:31] will retry after 437.70568ms: waiting for machine to come up
	I0422 10:38:26.220719   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:26.221177   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:26.221200   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:26.221148   15628 retry.go:31] will retry after 438.886905ms: waiting for machine to come up
	I0422 10:38:26.661712   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:26.662109   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:26.662144   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:26.662065   15628 retry.go:31] will retry after 503.335056ms: waiting for machine to come up
	I0422 10:38:27.166635   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:27.167072   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:27.167126   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:27.167012   15628 retry.go:31] will retry after 798.067912ms: waiting for machine to come up
	I0422 10:38:27.967000   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:27.967462   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:27.967494   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:27.967412   15628 retry.go:31] will retry after 775.145721ms: waiting for machine to come up
	I0422 10:38:28.744013   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:28.744366   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:28.744389   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:28.744336   15628 retry.go:31] will retry after 1.114755525s: waiting for machine to come up
	I0422 10:38:29.860547   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:29.860983   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:29.861013   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:29.860902   15628 retry.go:31] will retry after 1.404380425s: waiting for machine to come up
	I0422 10:38:31.267452   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:31.267888   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:31.267914   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:31.267841   15628 retry.go:31] will retry after 2.048742661s: waiting for machine to come up
	I0422 10:38:33.318039   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:33.318537   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:33.318566   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:33.318500   15628 retry.go:31] will retry after 2.397547405s: waiting for machine to come up
	I0422 10:38:35.718109   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:35.718472   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:35.718491   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:35.718443   15628 retry.go:31] will retry after 2.840628225s: waiting for machine to come up
	I0422 10:38:38.562290   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:38.562755   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:38.562784   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:38.562721   15628 retry.go:31] will retry after 3.644606309s: waiting for machine to come up
	I0422 10:38:42.208497   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:42.208800   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find current IP address of domain addons-649657 in network mk-addons-649657
	I0422 10:38:42.208819   15606 main.go:141] libmachine: (addons-649657) DBG | I0422 10:38:42.208758   15628 retry.go:31] will retry after 4.598552626s: waiting for machine to come up
	I0422 10:38:46.811357   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:46.811868   15606 main.go:141] libmachine: (addons-649657) Found IP for machine: 192.168.39.194
	I0422 10:38:46.811893   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has current primary IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:46.811900   15606 main.go:141] libmachine: (addons-649657) Reserving static IP address...
	I0422 10:38:46.812299   15606 main.go:141] libmachine: (addons-649657) DBG | unable to find host DHCP lease matching {name: "addons-649657", mac: "52:54:00:fd:fb:c8", ip: "192.168.39.194"} in network mk-addons-649657
	I0422 10:38:46.880741   15606 main.go:141] libmachine: (addons-649657) Reserved static IP address: 192.168.39.194
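
The "will retry after ..." messages above show libmachine polling for the guest's DHCP lease with growing, jittered delays (from 232ms up to about 4.6s) until the lease appears roughly 23 seconds after the domain was started. As a rough sketch of that pattern only, not minikube's actual retry code, and with lookupIP as a hypothetical stand-in for querying the libvirt DHCP leases, such a wait loop can be written as:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookupIP with growing, jittered delays until it succeeds
	// or the overall timeout expires.
	func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			// Jitter the delay, log it in the same style as retry.go, then grow it.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 4*time.Second {
				delay *= 2
			}
		}
		return "", errors.New("timed out waiting for machine to get an IP")
	}

	func main() {
		attempts := 0
		// Fake lookup that only "finds" an address on the fifth attempt.
		lookup := func() (string, error) {
			attempts++
			if attempts < 5 {
				return "", errors.New("no lease yet")
			}
			return "192.168.39.194", nil
		}
		ip, err := waitForIP(lookup, time.Minute)
		fmt.Println(ip, err)
	}
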
	I0422 10:38:46.880816   15606 main.go:141] libmachine: (addons-649657) Waiting for SSH to be available...
	I0422 10:38:46.880833   15606 main.go:141] libmachine: (addons-649657) DBG | Getting to WaitForSSH function...
	I0422 10:38:46.883253   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:46.883720   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:46.883770   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:46.883946   15606 main.go:141] libmachine: (addons-649657) DBG | Using SSH client type: external
	I0422 10:38:46.883975   15606 main.go:141] libmachine: (addons-649657) DBG | Using SSH private key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa (-rw-------)
	I0422 10:38:46.884005   15606 main.go:141] libmachine: (addons-649657) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 10:38:46.884021   15606 main.go:141] libmachine: (addons-649657) DBG | About to run SSH command:
	I0422 10:38:46.884036   15606 main.go:141] libmachine: (addons-649657) DBG | exit 0
	I0422 10:38:47.013234   15606 main.go:141] libmachine: (addons-649657) DBG | SSH cmd err, output: <nil>: 
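
The external SSH probe above is roughly equivalent to the following invocation, assembled from the logged arguments; the same command (with a different remote command) is handy for poking at the VM by hand when a step run over SSH, such as the curl in this test, times out:

	/usr/bin/ssh -F /dev/null \
	  -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	  -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	  -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -o IdentitiesOnly=yes \
	  -i /home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa \
	  -p 22 docker@192.168.39.194 "exit 0"
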
	I0422 10:38:47.013518   15606 main.go:141] libmachine: (addons-649657) KVM machine creation complete!
	I0422 10:38:47.013809   15606 main.go:141] libmachine: (addons-649657) Calling .GetConfigRaw
	I0422 10:38:47.014321   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:38:47.014484   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:38:47.014646   15606 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 10:38:47.014663   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:38:47.015922   15606 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 10:38:47.015936   15606 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 10:38:47.015942   15606 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 10:38:47.015948   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:47.018209   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.018570   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.018601   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.018707   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:47.018860   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.019032   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.019164   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:47.019335   15606 main.go:141] libmachine: Using SSH client type: native
	I0422 10:38:47.019544   15606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0422 10:38:47.019559   15606 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 10:38:47.120092   15606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 10:38:47.120120   15606 main.go:141] libmachine: Detecting the provisioner...
	I0422 10:38:47.120130   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:47.122651   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.122999   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.123027   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.123137   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:47.123289   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.123420   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.123565   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:47.123725   15606 main.go:141] libmachine: Using SSH client type: native
	I0422 10:38:47.123875   15606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0422 10:38:47.123885   15606 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 10:38:47.230101   15606 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 10:38:47.230179   15606 main.go:141] libmachine: found compatible host: buildroot
	I0422 10:38:47.230191   15606 main.go:141] libmachine: Provisioning with buildroot...
	I0422 10:38:47.230203   15606 main.go:141] libmachine: (addons-649657) Calling .GetMachineName
	I0422 10:38:47.230476   15606 buildroot.go:166] provisioning hostname "addons-649657"
	I0422 10:38:47.230499   15606 main.go:141] libmachine: (addons-649657) Calling .GetMachineName
	I0422 10:38:47.230682   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:47.233015   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.233345   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.233374   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.233493   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:47.233665   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.233796   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.233932   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:47.234098   15606 main.go:141] libmachine: Using SSH client type: native
	I0422 10:38:47.234265   15606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0422 10:38:47.234276   15606 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-649657 && echo "addons-649657" | sudo tee /etc/hostname
	I0422 10:38:47.354203   15606 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-649657
	
	I0422 10:38:47.354226   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:47.356552   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.356911   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.356941   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.357090   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:47.357263   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.357419   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.357546   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:47.357728   15606 main.go:141] libmachine: Using SSH client type: native
	I0422 10:38:47.357904   15606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0422 10:38:47.357927   15606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-649657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-649657/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-649657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 10:38:47.471583   15606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 10:38:47.471614   15606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18711-7633/.minikube CaCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18711-7633/.minikube}
	I0422 10:38:47.471643   15606 buildroot.go:174] setting up certificates
	I0422 10:38:47.471658   15606 provision.go:84] configureAuth start
	I0422 10:38:47.471669   15606 main.go:141] libmachine: (addons-649657) Calling .GetMachineName
	I0422 10:38:47.471961   15606 main.go:141] libmachine: (addons-649657) Calling .GetIP
	I0422 10:38:47.474574   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.474929   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.474954   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.475091   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:47.476911   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.477192   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.477215   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.477277   15606 provision.go:143] copyHostCerts
	I0422 10:38:47.477348   15606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem (1078 bytes)
	I0422 10:38:47.477491   15606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem (1123 bytes)
	I0422 10:38:47.477570   15606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem (1679 bytes)
	I0422 10:38:47.477634   15606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem org=jenkins.addons-649657 san=[127.0.0.1 192.168.39.194 addons-649657 localhost minikube]
	I0422 10:38:47.541200   15606 provision.go:177] copyRemoteCerts
	I0422 10:38:47.541260   15606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 10:38:47.541281   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:47.543814   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.544125   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.544150   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.544321   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:47.544499   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.544622   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:47.544751   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:38:47.628141   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 10:38:47.655092   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0422 10:38:47.681181   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 10:38:47.707974   15606 provision.go:87] duration metric: took 236.304055ms to configureAuth
	I0422 10:38:47.708004   15606 buildroot.go:189] setting minikube options for container-runtime
	I0422 10:38:47.708190   15606 config.go:182] Loaded profile config "addons-649657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 10:38:47.708283   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:47.710930   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.711266   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.711288   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.711500   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:47.711683   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.711840   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.711940   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:47.712074   15606 main.go:141] libmachine: Using SSH client type: native
	I0422 10:38:47.712260   15606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0422 10:38:47.712278   15606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 10:38:47.983213   15606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
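
The literal %!s(MISSING) in the printf command above looks alarming but appears to be a formatting artifact rather than a provisioning failure: it is how Go's fmt package renders a format verb with no matching argument, and the echoed output above shows the CRIO option was still written to /etc/sysconfig/crio.minikube. A minimal illustration of that fmt behavior:

	package main

	import "fmt"

	func main() {
		// A %s verb with no corresponding argument is rendered as %!s(MISSING).
		fmt.Println(fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s ..."))
		// Prints: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) ...
	}
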
	I0422 10:38:47.983246   15606 main.go:141] libmachine: Checking connection to Docker...
	I0422 10:38:47.983257   15606 main.go:141] libmachine: (addons-649657) Calling .GetURL
	I0422 10:38:47.984383   15606 main.go:141] libmachine: (addons-649657) DBG | Using libvirt version 6000000
	I0422 10:38:47.986258   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.986571   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.986603   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.986755   15606 main.go:141] libmachine: Docker is up and running!
	I0422 10:38:47.986770   15606 main.go:141] libmachine: Reticulating splines...
	I0422 10:38:47.986782   15606 client.go:171] duration metric: took 24.82335883s to LocalClient.Create
	I0422 10:38:47.986808   15606 start.go:167] duration metric: took 24.823438049s to libmachine.API.Create "addons-649657"
	I0422 10:38:47.986823   15606 start.go:293] postStartSetup for "addons-649657" (driver="kvm2")
	I0422 10:38:47.986838   15606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 10:38:47.986863   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:38:47.987084   15606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 10:38:47.987106   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:47.988811   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.989089   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:47.989114   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:47.989275   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:47.989415   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:47.989574   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:47.989659   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:38:48.071918   15606 ssh_runner.go:195] Run: cat /etc/os-release
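	(Aside, not part of the captured log: the repeated "sshutil.go:53] new ssh client" / "ssh_runner.go:195] Run:" lines above are commands executed inside the KVM guest over SSH. A minimal sketch of that pattern with golang.org/x/crypto/ssh follows; the address, user, and key path are copied from this log, everything else is a simplified stand-in for minikube's real ssh_runner.)

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the guest and runs a single command, returning combined output.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", fmt.Errorf("read key: %w", err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", fmt.Errorf("parse key: %w", err)
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", fmt.Errorf("dial: %w", err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		return "", fmt.Errorf("session: %w", err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.39.194:22", "docker",
		"/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa",
		"cat /etc/os-release")
	fmt.Println(out, err)
}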
	I0422 10:38:48.076737   15606 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 10:38:48.076761   15606 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/addons for local assets ...
	I0422 10:38:48.076856   15606 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/files for local assets ...
	I0422 10:38:48.076892   15606 start.go:296] duration metric: took 90.061633ms for postStartSetup
	I0422 10:38:48.076938   15606 main.go:141] libmachine: (addons-649657) Calling .GetConfigRaw
	I0422 10:38:48.077466   15606 main.go:141] libmachine: (addons-649657) Calling .GetIP
	I0422 10:38:48.079825   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.080163   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:48.080396   15606 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/config.json ...
	I0422 10:38:48.082002   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.082184   15606 start.go:128] duration metric: took 24.936172013s to createHost
	I0422 10:38:48.082207   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:48.084060   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.084380   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:48.084409   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.084478   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:48.084656   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:48.084812   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:48.084982   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:48.085127   15606 main.go:141] libmachine: Using SSH client type: native
	I0422 10:38:48.085307   15606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0422 10:38:48.085321   15606 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 10:38:48.185835   15606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713782328.147187196
	
	I0422 10:38:48.185864   15606 fix.go:216] guest clock: 1713782328.147187196
	I0422 10:38:48.185874   15606 fix.go:229] Guest: 2024-04-22 10:38:48.147187196 +0000 UTC Remote: 2024-04-22 10:38:48.082197786 +0000 UTC m=+25.046682825 (delta=64.98941ms)
	I0422 10:38:48.185913   15606 fix.go:200] guest clock delta is within tolerance: 64.98941ms
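	(Aside, not part of the captured log: fix.go above reads the guest clock with `date +%s.%N`, compares it to the host clock, and accepts the ~65ms skew as within tolerance. A minimal sketch of that check; the one-minute tolerance below is an assumption for illustration, only the guest timestamp is taken from the log.)

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the "seconds.nanoseconds" string printed by
// `date +%s.%N` into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1713782328.147187196") // value seen in the log
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Minute // illustrative; the log only shows ~65ms passing the check
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}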
	I0422 10:38:48.185918   15606 start.go:83] releasing machines lock for "addons-649657", held for 25.040010037s
	I0422 10:38:48.185937   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:38:48.186152   15606 main.go:141] libmachine: (addons-649657) Calling .GetIP
	I0422 10:38:48.188797   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.189155   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:48.189185   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.189338   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:38:48.189784   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:38:48.189962   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:38:48.190085   15606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 10:38:48.190131   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:48.190184   15606 ssh_runner.go:195] Run: cat /version.json
	I0422 10:38:48.190210   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:38:48.193037   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.193127   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.193372   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:48.193399   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.193443   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:48.193473   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:48.193524   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:48.193656   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:38:48.193726   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:48.193802   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:38:48.193863   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:48.193905   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:38:48.193971   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:38:48.194039   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:38:48.294301   15606 ssh_runner.go:195] Run: systemctl --version
	I0422 10:38:48.300532   15606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 10:38:48.466397   15606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 10:38:48.477261   15606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 10:38:48.477332   15606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 10:38:48.495808   15606 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 10:38:48.495832   15606 start.go:494] detecting cgroup driver to use...
	I0422 10:38:48.495895   15606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 10:38:48.515026   15606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 10:38:48.530178   15606 docker.go:217] disabling cri-docker service (if available) ...
	I0422 10:38:48.530238   15606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 10:38:48.544539   15606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 10:38:48.559329   15606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 10:38:48.676790   15606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 10:38:48.821698   15606 docker.go:233] disabling docker service ...
	I0422 10:38:48.821769   15606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 10:38:48.836738   15606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 10:38:48.850871   15606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 10:38:48.988823   15606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 10:38:49.113820   15606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 10:38:49.129599   15606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 10:38:49.150667   15606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 10:38:49.150721   15606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 10:38:49.163900   15606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 10:38:49.163978   15606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 10:38:49.176687   15606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 10:38:49.189298   15606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 10:38:49.201573   15606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 10:38:49.214322   15606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 10:38:49.227157   15606 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 10:38:49.247102   15606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
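	(Aside, not part of the captured log: the run of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf so the pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctl match what minikube expects. A minimal Go sketch of the same substitutions applied to an in-memory copy of the file; the starting content is assumed for illustration.)

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed starting content; the real file is /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Point the pause image at registry.k8s.io/pause:3.9.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Switch the cgroup manager to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop the old conmon_cgroup line, then re-add it as "pod" right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}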
	I0422 10:38:49.260407   15606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 10:38:49.272003   15606 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 10:38:49.272068   15606 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 10:38:49.287435   15606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 10:38:49.298831   15606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 10:38:49.411046   15606 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 10:38:49.561202   15606 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 10:38:49.561299   15606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 10:38:49.566882   15606 start.go:562] Will wait 60s for crictl version
	I0422 10:38:49.566951   15606 ssh_runner.go:195] Run: which crictl
	I0422 10:38:49.571336   15606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 10:38:49.607007   15606 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 10:38:49.607126   15606 ssh_runner.go:195] Run: crio --version
	I0422 10:38:49.643734   15606 ssh_runner.go:195] Run: crio --version
	I0422 10:38:49.674961   15606 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 10:38:49.676271   15606 main.go:141] libmachine: (addons-649657) Calling .GetIP
	I0422 10:38:49.678947   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:49.679310   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:38:49.679338   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:38:49.679516   15606 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 10:38:49.683969   15606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
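	(Aside, not part of the captured log: the `{ grep -v ... ; echo ... ; }` pipeline above drops any stale host.minikube.internal entry from /etc/hosts and appends the gateway IP. The same idea in a short Go sketch operating on file contents; the IP and hostname are taken from the log, the helper name is made up.)

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes any line already ending in "<TAB>hostname" and appends a
// fresh "IP<TAB>hostname" entry, mirroring the shell pipeline in the log.
func upsertHostsEntry(hosts, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	// Trim trailing blank lines before appending so the file does not accumulate them.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, ip+"\t"+hostname, "")
	return strings.Join(kept, "\n")
}

func main() {
	existing := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(existing, "192.168.39.1", "host.minikube.internal"))
}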
	I0422 10:38:49.698457   15606 kubeadm.go:877] updating cluster {Name:addons-649657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-649657 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 10:38:49.698570   15606 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 10:38:49.698615   15606 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 10:38:49.735897   15606 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 10:38:49.735989   15606 ssh_runner.go:195] Run: which lz4
	I0422 10:38:49.740620   15606 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0422 10:38:49.745386   15606 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 10:38:49.745409   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
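	(Aside, not part of the captured log: the stat above fails on the fresh guest, so the preloaded image tarball is copied over with scp and extracted. A tiny sketch of that check-then-copy decision, using local files as a stand-in for the SSH transfer; both paths in main are placeholders.)

package main

import (
	"fmt"
	"io"
	"os"
)

// ensureFile copies src to dst only when dst does not already exist,
// mirroring the "stat, then scp on status 1" decision in the log.
func ensureFile(src, dst string) (copied bool, err error) {
	if _, statErr := os.Stat(dst); statErr == nil {
		return false, nil // already present, nothing to do
	} else if !os.IsNotExist(statErr) {
		return false, statErr
	}
	in, err := os.Open(src)
	if err != nil {
		return false, err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return false, err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err == nil, err
}

func main() {
	copied, err := ensureFile("preloaded-images.tar.lz4", "/tmp/preloaded.tar.lz4")
	fmt.Println("copied:", copied, "err:", err)
}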
	I0422 10:38:51.292185   15606 crio.go:462] duration metric: took 1.551598233s to copy over tarball
	I0422 10:38:51.292256   15606 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 10:38:53.918826   15606 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.626545528s)
	I0422 10:38:53.918856   15606 crio.go:469] duration metric: took 2.626642493s to extract the tarball
	I0422 10:38:53.918863   15606 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 10:38:53.957426   15606 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 10:38:54.000505   15606 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 10:38:54.000527   15606 cache_images.go:84] Images are preloaded, skipping loading
	I0422 10:38:54.000534   15606 kubeadm.go:928] updating node { 192.168.39.194 8443 v1.30.0 crio true true} ...
	I0422 10:38:54.000629   15606 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-649657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-649657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 10:38:54.000692   15606 ssh_runner.go:195] Run: crio config
	I0422 10:38:54.050754   15606 cni.go:84] Creating CNI manager for ""
	I0422 10:38:54.050777   15606 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 10:38:54.050789   15606 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 10:38:54.050809   15606 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-649657 NodeName:addons-649657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 10:38:54.050957   15606 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-649657"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
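	(Aside, not part of the captured log: the kubeadm config above is rendered from the cluster settings logged a few lines earlier, node IP, cluster name, Kubernetes version, CIDRs. A stripped-down sketch of that rendering with text/template; the template here covers only the InitConfiguration stanza and is not minikube's actual template, though the values are the ones shown in this log.)

package main

import (
	"os"
	"text/template"
)

// Settings holds the handful of values that vary per cluster in this log.
type Settings struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  taints: []
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(initConfig))
	err := tmpl.Execute(os.Stdout, Settings{
		AdvertiseAddress: "192.168.39.194",
		BindPort:         8443,
		NodeName:         "addons-649657",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	})
	if err != nil {
		panic(err)
	}
}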
	
	I0422 10:38:54.051033   15606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 10:38:54.062581   15606 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 10:38:54.062650   15606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 10:38:54.073389   15606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0422 10:38:54.091376   15606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 10:38:54.108950   15606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0422 10:38:54.126873   15606 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I0422 10:38:54.130843   15606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 10:38:54.144219   15606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 10:38:54.286714   15606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 10:38:54.305842   15606 certs.go:68] Setting up /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657 for IP: 192.168.39.194
	I0422 10:38:54.305865   15606 certs.go:194] generating shared ca certs ...
	I0422 10:38:54.305879   15606 certs.go:226] acquiring lock for ca certs: {Name:mk0b77082b88c771d0b00be5267ca31dfee6f85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:54.306016   15606 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key
	I0422 10:38:54.482881   15606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt ...
	I0422 10:38:54.482905   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt: {Name:mk573d0df2447a344243cd0320bc02744b0a0cb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:54.483060   15606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key ...
	I0422 10:38:54.483079   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key: {Name:mkbba892ad24803d33bdd9f0663ff134beb893a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:54.483146   15606 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key
	I0422 10:38:54.663136   15606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt ...
	I0422 10:38:54.663164   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt: {Name:mkfc6c26312d3b3e9e186927f92c57740e56d2fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:54.663310   15606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key ...
	I0422 10:38:54.663320   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key: {Name:mkcccec01632708a58b44c2b15326f02db98e409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:54.663420   15606 certs.go:256] generating profile certs ...
	I0422 10:38:54.663476   15606 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.key
	I0422 10:38:54.663490   15606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt with IP's: []
	I0422 10:38:54.798329   15606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt ...
	I0422 10:38:54.798355   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: {Name:mkf225d80c3cb066317ff54ed4b5f84c6c5ea81f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:54.798496   15606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.key ...
	I0422 10:38:54.798507   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.key: {Name:mke8c68bb636e010b3bca0f2b152cfad1bee3b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:54.798572   15606 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.key.1a0dc645
	I0422 10:38:54.798588   15606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.crt.1a0dc645 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.194]
	I0422 10:38:55.008743   15606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.crt.1a0dc645 ...
	I0422 10:38:55.008787   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.crt.1a0dc645: {Name:mk4649aee81f9c78b4e81912b66088f7f2da2da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:55.008927   15606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.key.1a0dc645 ...
	I0422 10:38:55.008942   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.key.1a0dc645: {Name:mk1b3e2578f7d6e80ed5a43ab9a055fbdd305496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:55.009011   15606 certs.go:381] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.crt.1a0dc645 -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.crt
	I0422 10:38:55.009101   15606 certs.go:385] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.key.1a0dc645 -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.key
	I0422 10:38:55.009149   15606 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/proxy-client.key
	I0422 10:38:55.009166   15606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/proxy-client.crt with IP's: []
	I0422 10:38:55.675924   15606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/proxy-client.crt ...
	I0422 10:38:55.675951   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/proxy-client.crt: {Name:mk611730275760f07d3caabedff965afa7b5b867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:55.676102   15606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/proxy-client.key ...
	I0422 10:38:55.676113   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/proxy-client.key: {Name:mkcffa60ac09156fa9204336a51337aef6b00343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:38:55.676261   15606 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem (1679 bytes)
	I0422 10:38:55.676292   15606 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem (1078 bytes)
	I0422 10:38:55.676316   15606 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem (1123 bytes)
	I0422 10:38:55.676338   15606 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem (1679 bytes)
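	(Aside, not part of the captured log: certs.go above generates the shared minikubeCA and proxyClientCA certificate authorities and then signs the per-profile client, apiserver, and aggregator certificates with them. A compact sketch of generating a self-signed CA with the standard library; the key size and validity period are assumptions, and minikube's real implementation differs in detail.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

// newCA returns PEM-encoded certificate and key for a self-signed CA.
func newCA(commonName string) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048) // key size is an assumption
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: commonName},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().AddDate(10, 0, 0), // validity is an assumption
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	cert, key, err := newCA("minikubeCA")
	if err != nil {
		panic(err)
	}
	fmt.Printf("cert: %d bytes, key: %d bytes\n", len(cert), len(key))
}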
	I0422 10:38:55.676926   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 10:38:55.705729   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 10:38:55.733859   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 10:38:55.761212   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 10:38:55.788128   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0422 10:38:55.814678   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 10:38:55.842103   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 10:38:55.870019   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 10:38:55.900081   15606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 10:38:55.930494   15606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 10:38:55.948334   15606 ssh_runner.go:195] Run: openssl version
	I0422 10:38:55.954476   15606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 10:38:55.967462   15606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 10:38:55.972388   15606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 10:38:55.972453   15606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 10:38:55.978653   15606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 10:38:55.990570   15606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 10:38:55.995121   15606 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 10:38:55.995176   15606 kubeadm.go:391] StartCluster: {Name:addons-649657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-649657 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 10:38:55.995253   15606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 10:38:55.995311   15606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 10:38:56.034153   15606 cri.go:89] found id: ""
	I0422 10:38:56.034211   15606 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0422 10:38:56.044948   15606 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 10:38:56.055359   15606 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 10:38:56.065419   15606 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 10:38:56.065443   15606 kubeadm.go:156] found existing configuration files:
	
	I0422 10:38:56.065483   15606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 10:38:56.074920   15606 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 10:38:56.074982   15606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 10:38:56.084874   15606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 10:38:56.094285   15606 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 10:38:56.094339   15606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 10:38:56.104189   15606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 10:38:56.113912   15606 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 10:38:56.113971   15606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 10:38:56.123901   15606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 10:38:56.133270   15606 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 10:38:56.133336   15606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 10:38:56.143204   15606 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 10:38:56.316983   15606 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 10:39:06.100464   15606 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 10:39:06.100527   15606 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 10:39:06.100620   15606 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 10:39:06.100736   15606 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 10:39:06.100862   15606 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 10:39:06.100973   15606 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 10:39:06.102612   15606 out.go:204]   - Generating certificates and keys ...
	I0422 10:39:06.102701   15606 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 10:39:06.102775   15606 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 10:39:06.102858   15606 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0422 10:39:06.102937   15606 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0422 10:39:06.103029   15606 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0422 10:39:06.103108   15606 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0422 10:39:06.103214   15606 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0422 10:39:06.103383   15606 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-649657 localhost] and IPs [192.168.39.194 127.0.0.1 ::1]
	I0422 10:39:06.103473   15606 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0422 10:39:06.103630   15606 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-649657 localhost] and IPs [192.168.39.194 127.0.0.1 ::1]
	I0422 10:39:06.103720   15606 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0422 10:39:06.103803   15606 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0422 10:39:06.103856   15606 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0422 10:39:06.103906   15606 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 10:39:06.103950   15606 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 10:39:06.104018   15606 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 10:39:06.104106   15606 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 10:39:06.104193   15606 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 10:39:06.104276   15606 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 10:39:06.104385   15606 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 10:39:06.104476   15606 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 10:39:06.105991   15606 out.go:204]   - Booting up control plane ...
	I0422 10:39:06.106087   15606 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 10:39:06.106168   15606 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 10:39:06.106246   15606 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 10:39:06.106370   15606 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 10:39:06.106477   15606 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 10:39:06.106534   15606 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 10:39:06.106671   15606 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 10:39:06.106747   15606 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 10:39:06.106801   15606 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001813271s
	I0422 10:39:06.106863   15606 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 10:39:06.106914   15606 kubeadm.go:309] [api-check] The API server is healthy after 5.002917478s
	I0422 10:39:06.107009   15606 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 10:39:06.107112   15606 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 10:39:06.107164   15606 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 10:39:06.107338   15606 kubeadm.go:309] [mark-control-plane] Marking the node addons-649657 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 10:39:06.107414   15606 kubeadm.go:309] [bootstrap-token] Using token: q8pyvi.q9qr6sp0xqf6hnwc
	I0422 10:39:06.109148   15606 out.go:204]   - Configuring RBAC rules ...
	I0422 10:39:06.109267   15606 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 10:39:06.109357   15606 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 10:39:06.109516   15606 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 10:39:06.109629   15606 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 10:39:06.109727   15606 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 10:39:06.109821   15606 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 10:39:06.109953   15606 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 10:39:06.110016   15606 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 10:39:06.110088   15606 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 10:39:06.110099   15606 kubeadm.go:309] 
	I0422 10:39:06.110182   15606 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 10:39:06.110193   15606 kubeadm.go:309] 
	I0422 10:39:06.110289   15606 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 10:39:06.110301   15606 kubeadm.go:309] 
	I0422 10:39:06.110351   15606 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 10:39:06.110412   15606 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 10:39:06.110458   15606 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 10:39:06.110464   15606 kubeadm.go:309] 
	I0422 10:39:06.110512   15606 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 10:39:06.110518   15606 kubeadm.go:309] 
	I0422 10:39:06.110561   15606 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 10:39:06.110567   15606 kubeadm.go:309] 
	I0422 10:39:06.110620   15606 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 10:39:06.110739   15606 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 10:39:06.110848   15606 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 10:39:06.110857   15606 kubeadm.go:309] 
	I0422 10:39:06.110963   15606 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 10:39:06.111036   15606 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 10:39:06.111042   15606 kubeadm.go:309] 
	I0422 10:39:06.111121   15606 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token q8pyvi.q9qr6sp0xqf6hnwc \
	I0422 10:39:06.111207   15606 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f \
	I0422 10:39:06.111227   15606 kubeadm.go:309] 	--control-plane 
	I0422 10:39:06.111233   15606 kubeadm.go:309] 
	I0422 10:39:06.111318   15606 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 10:39:06.111328   15606 kubeadm.go:309] 
	I0422 10:39:06.111410   15606 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token q8pyvi.q9qr6sp0xqf6hnwc \
	I0422 10:39:06.111542   15606 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f 
	I0422 10:39:06.111555   15606 cni.go:84] Creating CNI manager for ""
	I0422 10:39:06.111564   15606 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 10:39:06.113398   15606 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 10:39:06.114926   15606 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 10:39:06.128525   15606 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 10:39:06.151386   15606 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 10:39:06.151454   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:06.151521   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-649657 minikube.k8s.io/updated_at=2024_04_22T10_39_06_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437 minikube.k8s.io/name=addons-649657 minikube.k8s.io/primary=true
	I0422 10:39:06.258414   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:06.327385   15606 ops.go:34] apiserver oom_adj: -16
	I0422 10:39:06.758506   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:07.258612   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:07.759209   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:08.258801   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:08.759123   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:09.259055   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:09.758838   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:10.258815   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:10.759281   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:11.258560   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:11.759140   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:12.259255   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:12.759072   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:13.258551   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:13.758787   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:14.258504   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:14.758801   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:15.259133   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:15.758741   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:16.259368   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:16.758500   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:17.259193   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:17.758622   15606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 10:39:17.845699   15606 kubeadm.go:1107] duration metric: took 11.694314812s to wait for elevateKubeSystemPrivileges
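	(Aside, not part of the captured log: the burst of `kubectl get sa default` invocations above is a roughly half-second poll loop, retrying until the default service account exists; that wait is what the 11.69s elevateKubeSystemPrivileges metric measures. A minimal sketch of the wait pattern; the probe below is a placeholder for running kubectl on the node, and the timeout is an assumption.)

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls probe every interval until it succeeds or the timeout elapses,
// mirroring the retry cadence visible in the log.
func waitFor(probe func() error, interval, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	deadline := start.Add(timeout)
	for {
		if err := probe(); err == nil {
			return time.Since(start), nil
		}
		if time.Now().After(deadline) {
			return time.Since(start), errors.New("timed out waiting for probe")
		}
		time.Sleep(interval)
	}
}

func main() {
	attempts := 0
	// Placeholder probe: in the log this is
	// `kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig` run on the node.
	probe := func() error {
		attempts++
		if attempts < 5 {
			return errors.New(`serviceaccount "default" not found`)
		}
		return nil
	}
	took, err := waitFor(probe, 500*time.Millisecond, time.Minute)
	fmt.Println(took, err, "attempts:", attempts)
}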
	W0422 10:39:17.845757   15606 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 10:39:17.845766   15606 kubeadm.go:393] duration metric: took 21.85059615s to StartCluster
	I0422 10:39:17.845786   15606 settings.go:142] acquiring lock: {Name:mkd680667f0df4166491741d55b55ac111bb0138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:39:17.845938   15606 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 10:39:17.846325   15606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/kubeconfig: {Name:mkee6de4c6906fe5621e8aeac858a93219648db5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 10:39:17.846539   15606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0422 10:39:17.846545   15606 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 10:39:17.848521   15606 out.go:177] * Verifying Kubernetes components...
	I0422 10:39:17.846594   15606 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0422 10:39:17.846754   15606 config.go:182] Loaded profile config "addons-649657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
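The toEnable map above drives everything that follows: each addon flagged true is set up on its own goroutine, which is why the "Setting addon ...", "Found binary path ..." and "Launching plugin server ..." messages below interleave rather than appearing in order. A rough illustration of that fan-out, using a hypothetical enableAddon helper rather than minikube's actual addons package:

    package main

    import (
    	"fmt"
    	"sync"
    )

    // enableAddon stands in for the per-addon setup work
    // (loading the host, copying manifests, applying them).
    func enableAddon(profile, name string) {
    	fmt.Printf("Setting addon %s=true in %q\n", name, profile)
    }

    func main() {
    	toEnable := map[string]bool{
    		"ingress": true, "ingress-dns": true, "metrics-server": true,
    		"registry": true, "yakd": true, "ambassador": false,
    	}
    	var wg sync.WaitGroup
    	for name, enabled := range toEnable {
    		if !enabled {
    			continue
    		}
    		wg.Add(1)
    		// one goroutine per enabled addon
    		go func(n string) {
    			defer wg.Done()
    			enableAddon("addons-649657", n)
    		}(name)
    	}
    	wg.Wait()
    }

Enabling the addons concurrently keeps total setup time close to the slowest addon rather than the sum of all of them, at the cost of the interleaved logging seen in this section.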
	I0422 10:39:17.849735   15606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 10:39:17.849757   15606 addons.go:69] Setting yakd=true in profile "addons-649657"
	I0422 10:39:17.849767   15606 addons.go:69] Setting cloud-spanner=true in profile "addons-649657"
	I0422 10:39:17.849789   15606 addons.go:234] Setting addon yakd=true in "addons-649657"
	I0422 10:39:17.849801   15606 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-649657"
	I0422 10:39:17.849806   15606 addons.go:234] Setting addon cloud-spanner=true in "addons-649657"
	I0422 10:39:17.849808   15606 addons.go:69] Setting ingress-dns=true in profile "addons-649657"
	I0422 10:39:17.849810   15606 addons.go:69] Setting ingress=true in profile "addons-649657"
	I0422 10:39:17.849825   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.849831   15606 addons.go:234] Setting addon ingress-dns=true in "addons-649657"
	I0422 10:39:17.849837   15606 addons.go:234] Setting addon ingress=true in "addons-649657"
	I0422 10:39:17.849843   15606 addons.go:69] Setting default-storageclass=true in profile "addons-649657"
	I0422 10:39:17.849850   15606 addons.go:69] Setting helm-tiller=true in profile "addons-649657"
	I0422 10:39:17.849862   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.849864   15606 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-649657"
	I0422 10:39:17.849867   15606 addons.go:234] Setting addon helm-tiller=true in "addons-649657"
	I0422 10:39:17.849872   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.849886   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.850199   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.849801   15606 addons.go:69] Setting metrics-server=true in profile "addons-649657"
	I0422 10:39:17.850224   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.850234   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.850238   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.850241   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.850245   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.850245   15606 addons.go:69] Setting inspektor-gadget=true in profile "addons-649657"
	I0422 10:39:17.849838   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.850263   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.850251   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.850276   15606 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-649657"
	I0422 10:39:17.850226   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.850339   15606 addons.go:69] Setting registry=true in profile "addons-649657"
	I0422 10:39:17.850342   15606 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-649657"
	I0422 10:39:17.850265   15606 addons.go:234] Setting addon inspektor-gadget=true in "addons-649657"
	I0422 10:39:17.849844   15606 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-649657"
	I0422 10:39:17.850359   15606 addons.go:69] Setting volumesnapshots=true in profile "addons-649657"
	I0422 10:39:17.850357   15606 addons.go:69] Setting storage-provisioner=true in profile "addons-649657"
	I0422 10:39:17.850368   15606 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-649657"
	I0422 10:39:17.850368   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.850375   15606 addons.go:234] Setting addon volumesnapshots=true in "addons-649657"
	I0422 10:39:17.849757   15606 addons.go:69] Setting gcp-auth=true in profile "addons-649657"
	I0422 10:39:17.850384   15606 addons.go:234] Setting addon storage-provisioner=true in "addons-649657"
	I0422 10:39:17.850238   15606 addons.go:234] Setting addon metrics-server=true in "addons-649657"
	I0422 10:39:17.850392   15606 mustload.go:65] Loading cluster: addons-649657
	I0422 10:39:17.850360   15606 addons.go:234] Setting addon registry=true in "addons-649657"
	I0422 10:39:17.850385   15606 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-649657"
	I0422 10:39:17.850562   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.850572   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.850588   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.850607   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.850633   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.850674   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.850565   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.851115   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851144   15606 config.go:182] Loaded profile config "addons-649657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 10:39:17.851166   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.851190   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.851213   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851239   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.851262   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851245   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851316   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.851218   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851350   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.851319   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.851401   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.851520   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851192   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851688   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.851570   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851815   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.851879   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.851915   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.851666   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.867526   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40427
	I0422 10:39:17.870809   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43423
	I0422 10:39:17.877088   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40447
	I0422 10:39:17.877144   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44619
	I0422 10:39:17.877956   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.878077   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.878141   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.878205   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.879764   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.879783   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.879918   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.879930   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.880050   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.880063   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.880188   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.880202   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.881595   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.881637   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.881599   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.881710   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.882214   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.882268   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.882538   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.882556   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.883034   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.883053   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.882219   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.883250   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.906369   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34947
	I0422 10:39:17.906779   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.908847   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36243
	I0422 10:39:17.909274   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.909292   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.909733   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.910062   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.910236   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44113
	I0422 10:39:17.910761   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.910793   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.911009   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.911172   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.911183   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.911519   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.912001   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.912022   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.918022   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38705
	I0422 10:39:17.918123   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.918143   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.918202   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37223
	I0422 10:39:17.918551   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.919128   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.919150   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.919208   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.919397   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.919458   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.919889   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39239
	I0422 10:39:17.920071   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.920082   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.920645   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.920680   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0422 10:39:17.920710   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.921229   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.921392   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.921404   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.921768   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.921781   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.921832   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.921841   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.921861   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.922161   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.922311   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.922511   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.922529   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.922538   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39401
	I0422 10:39:17.923342   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.923409   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.923451   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39629
	I0422 10:39:17.925844   15606 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0422 10:39:17.927368   15606 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0422 10:39:17.927392   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0422 10:39:17.927413   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
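Each "installing /etc/kubernetes/addons/..." plus "scp memory -->" pair in this section copies an addon manifest that exists only in memory onto the node over SSH before it is applied. The snippet below is not ssh_runner.go's implementation (which speaks the scp protocol); it is a minimal equivalent that streams an in-memory manifest to the same destination path with golang.org/x/crypto/ssh, reusing the host, user and key path shown in the surrounding log lines:

    package main

    import (
    	"bytes"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.194:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// Abbreviated manifest; the real content never touches local disk.
    	manifest := []byte("apiVersion: apps/v1\nkind: Deployment\n# ...\n")
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(manifest)
    	// Stream the in-memory manifest straight to its destination on the node.
    	if err := sess.Run("sudo tee /etc/kubernetes/addons/helm-tiller-dp.yaml >/dev/null"); err != nil {
    		log.Fatal(err)
    	}
    }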
	I0422 10:39:17.925883   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.925823   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.927553   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.925456   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.927510   15606 addons.go:234] Setting addon default-storageclass=true in "addons-649657"
	I0422 10:39:17.927662   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.928000   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.928035   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.928852   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.928879   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.929018   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.929037   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.929451   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.929517   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.929732   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.931243   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.933109   15606 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0422 10:39:17.933224   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.933259   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.933894   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:17.934561   15606 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0422 10:39:17.934655   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.935894   15606 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0422 10:39:17.937376   15606 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0422 10:39:17.937395   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0422 10:39:17.937412   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:17.935920   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46093
	I0422 10:39:17.934760   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:17.934676   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:17.937595   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.938280   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:17.938492   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:17.939085   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.939625   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.939642   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.939991   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.940257   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.940506   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.941209   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:17.941225   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:17.941247   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.941390   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:17.941512   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:17.941623   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:17.942535   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0422 10:39:17.943427   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.944693   15606 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-649657"
	I0422 10:39:17.944737   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.945126   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.945159   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.945364   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43735
	I0422 10:39:17.945767   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.945784   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.946211   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.946799   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.946832   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.947060   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.947077   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33721
	I0422 10:39:17.947418   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.947492   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.947504   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.948290   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.948474   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.949545   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35923
	I0422 10:39:17.950032   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.950047   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.950124   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.950360   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:17.950755   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.950790   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.951068   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.951081   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.951479   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.952018   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.952061   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.952262   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.952446   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.954411   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.956873   15606 out.go:177]   - Using image docker.io/registry:2.8.3
	I0422 10:39:17.958348   15606 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0422 10:39:17.959549   15606 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0422 10:39:17.959571   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0422 10:39:17.959594   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:17.963397   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.963787   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:17.963811   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.964058   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0422 10:39:17.964207   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:17.964457   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:17.964539   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.964645   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:17.964916   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:17.965319   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.965339   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.966377   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.966544   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.968085   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.970030   15606 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0422 10:39:17.971484   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0422 10:39:17.971493   15606 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 10:39:17.971510   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 10:39:17.971530   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:17.971610   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45485
	I0422 10:39:17.972347   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.974863   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.975386   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:17.975409   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.975714   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.975729   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.975798   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:17.975845   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.976076   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.976125   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:17.976231   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.976277   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:17.976419   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:17.977187   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40319
	I0422 10:39:17.977993   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.978012   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.978257   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.978338   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.978554   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.980499   15606 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0422 10:39:17.978719   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.979538   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.980035   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42665
	I0422 10:39:17.981751   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43855
	I0422 10:39:17.981830   15606 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0422 10:39:17.981843   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0422 10:39:17.981861   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:17.981920   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.981927   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35651
	I0422 10:39:17.982143   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.982658   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.982729   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35751
	I0422 10:39:17.982831   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.983098   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.983560   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.983579   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.983638   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.983746   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.983757   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.983882   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.983892   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.984046   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.984064   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.984513   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.984516   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.984567   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.984545   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I0422 10:39:17.984985   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.985024   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.985766   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.985826   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.985867   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.985927   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.986080   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:17.986116   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:17.986729   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.986747   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:17.987116   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:17.987296   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:17.987790   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.987891   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.988088   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.989534   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:17.989594   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:17.989608   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:17.989634   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.989675   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.989712   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:17.989873   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:17.991637   15606 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0422 10:39:17.992801   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0422 10:39:17.989997   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:17.990074   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35593
	I0422 10:39:17.990587   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37483
	I0422 10:39:17.991021   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:17.992765   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44621
	I0422 10:39:17.995166   15606 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0422 10:39:17.995525   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.995920   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.996350   15606 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0422 10:39:17.996730   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:17.997805   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0422 10:39:17.998377   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:17.999145   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0422 10:39:17.999148   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0422 10:39:17.999569   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:18.000343   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:18.000361   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:18.000393   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.002095   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0422 10:39:18.000409   15606 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0422 10:39:17.999709   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:18.000705   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:18.000723   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:18.002130   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0422 10:39:18.002147   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:18.003950   15606 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0422 10:39:18.004146   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:18.005070   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.004170   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:18.004192   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.006531   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0422 10:39:18.005136   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.005173   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0422 10:39:18.004694   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.005889   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:18.006135   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I0422 10:39:18.006970   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:18.007182   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:18.009101   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0422 10:39:18.010356   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0422 10:39:18.008155   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.008163   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.008423   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.008446   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.008457   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:18.008721   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:18.008894   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.010420   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35887
	I0422 10:39:18.011677   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.012276   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.012903   15606 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0422 10:39:18.013276   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:18.014251   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.013286   15606 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0422 10:39:18.015681   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.017143   15606 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0422 10:39:18.017166   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0422 10:39:18.017182   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.013661   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:18.017203   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:18.014009   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:18.014294   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0422 10:39:18.013493   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.014435   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:18.015047   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.015795   15606 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0422 10:39:18.015839   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.017593   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:18.018533   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:18.018670   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.019753   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.019773   15606 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 10:39:18.019825   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0422 10:39:18.020076   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:18.020080   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.021120   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:18.020232   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.021143   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.022498   15606 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 10:39:18.022513   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 10:39:18.021111   15606 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0422 10:39:18.023851   15606 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0422 10:39:18.023869   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0422 10:39:18.023884   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.022530   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.020859   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.021168   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.021347   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:18.021376   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:18.021523   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:18.022592   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:18.021163   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.024183   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.025795   15606 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0422 10:39:18.024688   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.025260   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:18.028635   15606 out.go:177]   - Using image docker.io/busybox:stable
	I0422 10:39:18.027603   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.027668   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.028688   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.028254   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.028328   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.028712   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.030138   15606 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0422 10:39:18.030154   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0422 10:39:18.030171   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.028941   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.030201   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.029221   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:18.029712   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.029842   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:18.029996   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.030257   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.030318   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.030334   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.030418   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.030599   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.030614   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.030679   15606 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 10:39:18.030691   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 10:39:18.030708   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:18.030742   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.030882   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.030899   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:18.030937   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:18.031283   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.031454   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	W0422 10:39:18.032687   15606 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37526->192.168.39.194:22: read: connection reset by peer
	I0422 10:39:18.032714   15606 retry.go:31] will retry after 324.459983ms: ssh: handshake failed: read tcp 192.168.39.1:37526->192.168.39.194:22: read: connection reset by peer
	I0422 10:39:18.033688   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.033714   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.034002   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.034020   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.034080   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:18.034099   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:18.034130   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.034246   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.034299   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:18.034342   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.034425   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:18.034470   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:18.034708   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:18.034859   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	W0422 10:39:18.035321   15606 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0422 10:39:18.035339   15606 retry.go:31] will retry after 285.480819ms: ssh: handshake failed: EOF
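The two "dial failure (will retry)" warnings just above are treated as transient: retry.go simply waits a short randomized delay (324ms and 285ms here) and dials again. A compact version of that retry-after-delay pattern, with a hypothetical dialNode function that fails twice before succeeding:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var attempts int

    // dialNode stands in for opening the SSH connection to the node;
    // it fails twice and then succeeds, mimicking the log above.
    func dialNode() error {
    	attempts++
    	if attempts <= 2 {
    		return errors.New("ssh: handshake failed: connection reset by peer")
    	}
    	return nil
    }

    // retryAfter retries fn up to maxTries times, sleeping a
    // randomized delay between attempts.
    func retryAfter(maxTries int, fn func() error) error {
    	var err error
    	for i := 0; i < maxTries; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		delay := time.Duration(200+rand.Intn(300)) * time.Millisecond
    		fmt.Printf("will retry after %s: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	if err := retryAfter(5, dialNode); err != nil {
    		fmt.Println("giving up:", err)
    	}
    }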
	I0422 10:39:18.268023   15606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 10:39:18.268038   15606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
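The long bash pipeline above patches the CoreDNS Corefile: it inserts a hosts stanza resolving host.minikube.internal to the host-side gateway (192.168.39.1) ahead of the forward directive, adds a log directive ahead of errors, and pushes the result back with kubectl replace. Below is the same edit expressed directly in Go over a Corefile string; this is only a sketch of the transformation, not the code behind the pipeline, and the sample Corefile is abbreviated:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // addMinikubeHosts inserts a hosts block before the forward directive
    // and a log directive before errors, mirroring the sed expressions in
    // the pipeline above.
    func addMinikubeHosts(corefile, hostIP string) string {
    	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
    	var out []string
    	for _, line := range strings.Split(corefile, "\n") {
    		trimmed := strings.TrimSpace(line)
    		if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
    			out = append(out, hostsBlock)
    		}
    		if trimmed == "errors" {
    			out = append(out, "        log")
    		}
    		out = append(out, line)
    	}
    	return strings.Join(out, "\n")
    }

    func main() {
    	corefile := `.:53 {
            errors
            health
            forward . /etc/resolv.conf {
               max_concurrent 1000
            }
            cache 30
        }`
    	fmt.Println(addMinikubeHosts(corefile, "192.168.39.1"))
    }

The fallthrough keyword lets CoreDNS pass names the hosts block does not match on to the remaining plugins, so normal cluster DNS is unaffected.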
	I0422 10:39:18.296721   15606 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 10:39:18.296739   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0422 10:39:18.324868   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0422 10:39:18.359660   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0422 10:39:18.365196   15606 node_ready.go:35] waiting up to 6m0s for node "addons-649657" to be "Ready" ...
	I0422 10:39:18.368087   15606 node_ready.go:49] node "addons-649657" has status "Ready":"True"
	I0422 10:39:18.368105   15606 node_ready.go:38] duration metric: took 2.88419ms for node "addons-649657" to be "Ready" ...
	I0422 10:39:18.368113   15606 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 10:39:18.373521   15606 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-649657" in "kube-system" namespace to be "Ready" ...
	I0422 10:39:18.379624   15606 pod_ready.go:92] pod "etcd-addons-649657" in "kube-system" namespace has status "Ready":"True"
	I0422 10:39:18.379645   15606 pod_ready.go:81] duration metric: took 6.103757ms for pod "etcd-addons-649657" in "kube-system" namespace to be "Ready" ...
	I0422 10:39:18.379653   15606 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-649657" in "kube-system" namespace to be "Ready" ...
	I0422 10:39:18.384269   15606 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0422 10:39:18.384287   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0422 10:39:18.390003   15606 pod_ready.go:92] pod "kube-apiserver-addons-649657" in "kube-system" namespace has status "Ready":"True"
	I0422 10:39:18.390022   15606 pod_ready.go:81] duration metric: took 10.364034ms for pod "kube-apiserver-addons-649657" in "kube-system" namespace to be "Ready" ...
	I0422 10:39:18.390035   15606 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-649657" in "kube-system" namespace to be "Ready" ...
	I0422 10:39:18.401422   15606 pod_ready.go:92] pod "kube-controller-manager-addons-649657" in "kube-system" namespace has status "Ready":"True"
	I0422 10:39:18.401441   15606 pod_ready.go:81] duration metric: took 11.400738ms for pod "kube-controller-manager-addons-649657" in "kube-system" namespace to be "Ready" ...
	I0422 10:39:18.401450   15606 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-649657" in "kube-system" namespace to be "Ready" ...
	I0422 10:39:18.414766   15606 pod_ready.go:92] pod "kube-scheduler-addons-649657" in "kube-system" namespace has status "Ready":"True"
	I0422 10:39:18.414792   15606 pod_ready.go:81] duration metric: took 13.3369ms for pod "kube-scheduler-addons-649657" in "kube-system" namespace to be "Ready" ...
	I0422 10:39:18.414800   15606 pod_ready.go:38] duration metric: took 46.677237ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 10:39:18.414813   15606 api_server.go:52] waiting for apiserver process to appear ...
	I0422 10:39:18.414854   15606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 10:39:18.415929   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 10:39:18.418845   15606 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0422 10:39:18.418870   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0422 10:39:18.463448   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0422 10:39:18.507667   15606 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 10:39:18.507690   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 10:39:18.597821   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0422 10:39:18.600736   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0422 10:39:18.603482   15606 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0422 10:39:18.603501   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0422 10:39:18.675658   15606 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0422 10:39:18.675687   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0422 10:39:18.678536   15606 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0422 10:39:18.678560   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0422 10:39:18.740553   15606 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0422 10:39:18.740582   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0422 10:39:18.821647   15606 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 10:39:18.821676   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 10:39:18.856356   15606 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0422 10:39:18.856382   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0422 10:39:18.866663   15606 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0422 10:39:18.866691   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0422 10:39:18.934541   15606 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0422 10:39:18.934565   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0422 10:39:18.964434   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0422 10:39:19.040346   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 10:39:19.082294   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0422 10:39:19.188453   15606 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0422 10:39:19.188480   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0422 10:39:19.217582   15606 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0422 10:39:19.217613   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0422 10:39:19.225114   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 10:39:19.231792   15606 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0422 10:39:19.231819   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0422 10:39:19.253914   15606 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0422 10:39:19.253937   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0422 10:39:19.587300   15606 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0422 10:39:19.587326   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0422 10:39:19.639707   15606 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0422 10:39:19.639730   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0422 10:39:19.644485   15606 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0422 10:39:19.644505   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0422 10:39:19.719319   15606 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0422 10:39:19.719343   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0422 10:39:19.884581   15606 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0422 10:39:19.884607   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0422 10:39:19.993721   15606 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0422 10:39:19.993755   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0422 10:39:20.065847   15606 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0422 10:39:20.065876   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0422 10:39:20.313543   15606 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0422 10:39:20.313564   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0422 10:39:20.315885   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0422 10:39:20.319011   15606 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0422 10:39:20.319033   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0422 10:39:20.400315   15606 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0422 10:39:20.400346   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0422 10:39:20.732222   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0422 10:39:20.733654   15606 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0422 10:39:20.733671   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0422 10:39:20.758956   15606 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0422 10:39:20.758981   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0422 10:39:20.898686   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.57378809s)
	I0422 10:39:20.898740   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:20.898748   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:20.898685   15606 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.630617884s)
	I0422 10:39:20.898808   15606 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0422 10:39:20.899033   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:20.899096   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:20.899106   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:20.899121   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:20.899130   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:20.899354   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:20.899394   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:20.899418   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:21.189502   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0422 10:39:21.221478   15606 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0422 10:39:21.221506   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0422 10:39:21.402736   15606 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-649657" context rescaled to 1 replicas
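This is a single-node cluster, so kapi.go trims coredns down to one replica as soon as the addons start applying. A loosely equivalent manual step would be (hypothetical, not what the test runs):

	kubectl --context addons-649657 -n kube-system scale deployment coredns --replicas=1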
	I0422 10:39:21.611303   15606 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0422 10:39:21.611328   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0422 10:39:21.851268   15606 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0422 10:39:21.851299   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0422 10:39:21.888075   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.528376597s)
	I0422 10:39:21.888131   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:21.888143   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:21.888168   15606 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.473295021s)
	I0422 10:39:21.888201   15606 api_server.go:72] duration metric: took 4.0416321s to wait for apiserver process to appear ...
	I0422 10:39:21.888212   15606 api_server.go:88] waiting for apiserver healthz status ...
	I0422 10:39:21.888232   15606 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0422 10:39:21.888431   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:21.888450   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:21.888462   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:21.888478   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:21.888486   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:21.888700   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:21.888726   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:21.916570   15606 api_server.go:279] https://192.168.39.194:8443/healthz returned 200:
	ok
	I0422 10:39:21.920624   15606 api_server.go:141] control plane version: v1.30.0
	I0422 10:39:21.920647   15606 api_server.go:131] duration metric: took 32.4294ms to wait for apiserver health ...
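The healthz probe above talks to the apiserver directly over HTTPS. With default RBAC the /healthz, /livez and /readyz endpoints are readable without credentials, so the same check can be made by hand from the host (sketch; expect the literal response "ok" when the apiserver is healthy):

	curl -sk https://192.168.39.194:8443/healthz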
	I0422 10:39:21.920655   15606 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 10:39:21.933513   15606 system_pods.go:59] 8 kube-system pods found
	I0422 10:39:21.933550   15606 system_pods.go:61] "coredns-7db6d8ff4d-2mxqp" [aa2ffe62-c568-4ca9-b23a-2976185dc0c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:21.933560   15606 system_pods.go:61] "coredns-7db6d8ff4d-tlwhf" [8980ac23-fb3e-457f-b6bb-b238465edfbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:21.933567   15606 system_pods.go:61] "etcd-addons-649657" [32390e86-daf9-4070-978d-3fe7fe2f42ca] Running
	I0422 10:39:21.933574   15606 system_pods.go:61] "kube-apiserver-addons-649657" [a13ce6af-f99b-4f74-beb5-e99cb393909e] Running
	I0422 10:39:21.933579   15606 system_pods.go:61] "kube-controller-manager-addons-649657" [add2d793-686a-4b81-8b08-fc8d4dd539bb] Running
	I0422 10:39:21.933592   15606 system_pods.go:61] "kube-proxy-hlgg9" [478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e] Running
	I0422 10:39:21.933597   15606 system_pods.go:61] "kube-scheduler-addons-649657" [d9bdd843-f8d4-45a9-977d-bed508686f8f] Running
	I0422 10:39:21.933606   15606 system_pods.go:61] "nvidia-device-plugin-daemonset-w4vxc" [3bfb0bd5-3242-4f72-9f7c-0c79543badd2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0422 10:39:21.933619   15606 system_pods.go:74] duration metric: took 12.958213ms to wait for pod list to return data ...
	I0422 10:39:21.933632   15606 default_sa.go:34] waiting for default service account to be created ...
	I0422 10:39:21.948729   15606 default_sa.go:45] found service account: "default"
	I0422 10:39:21.948758   15606 default_sa.go:55] duration metric: took 15.115185ms for default service account to be created ...
	I0422 10:39:21.948769   15606 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 10:39:21.954167   15606 system_pods.go:86] 8 kube-system pods found
	I0422 10:39:21.954195   15606 system_pods.go:89] "coredns-7db6d8ff4d-2mxqp" [aa2ffe62-c568-4ca9-b23a-2976185dc0c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:21.954204   15606 system_pods.go:89] "coredns-7db6d8ff4d-tlwhf" [8980ac23-fb3e-457f-b6bb-b238465edfbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:21.954210   15606 system_pods.go:89] "etcd-addons-649657" [32390e86-daf9-4070-978d-3fe7fe2f42ca] Running
	I0422 10:39:21.954214   15606 system_pods.go:89] "kube-apiserver-addons-649657" [a13ce6af-f99b-4f74-beb5-e99cb393909e] Running
	I0422 10:39:21.954218   15606 system_pods.go:89] "kube-controller-manager-addons-649657" [add2d793-686a-4b81-8b08-fc8d4dd539bb] Running
	I0422 10:39:21.954222   15606 system_pods.go:89] "kube-proxy-hlgg9" [478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e] Running
	I0422 10:39:21.954226   15606 system_pods.go:89] "kube-scheduler-addons-649657" [d9bdd843-f8d4-45a9-977d-bed508686f8f] Running
	I0422 10:39:21.954231   15606 system_pods.go:89] "nvidia-device-plugin-daemonset-w4vxc" [3bfb0bd5-3242-4f72-9f7c-0c79543badd2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0422 10:39:21.954242   15606 retry.go:31] will retry after 201.780034ms: missing components: kube-dns
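The poll above re-lists kube-system pods with short backoffs until every required component, here kube-dns (coredns), reports Running; the retries that follow repeat the same listing as more addon pods appear. A loosely equivalent one-shot wait outside the harness would be (hypothetical, not what minikube runs, and keyed on Ready rather than Running):

	kubectl --context addons-649657 -n kube-system wait \
	  --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s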
	I0422 10:39:22.176121   15606 system_pods.go:86] 9 kube-system pods found
	I0422 10:39:22.176162   15606 system_pods.go:89] "coredns-7db6d8ff4d-2mxqp" [aa2ffe62-c568-4ca9-b23a-2976185dc0c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:22.176174   15606 system_pods.go:89] "coredns-7db6d8ff4d-tlwhf" [8980ac23-fb3e-457f-b6bb-b238465edfbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:22.176182   15606 system_pods.go:89] "etcd-addons-649657" [32390e86-daf9-4070-978d-3fe7fe2f42ca] Running
	I0422 10:39:22.176192   15606 system_pods.go:89] "kube-apiserver-addons-649657" [a13ce6af-f99b-4f74-beb5-e99cb393909e] Running
	I0422 10:39:22.176198   15606 system_pods.go:89] "kube-controller-manager-addons-649657" [add2d793-686a-4b81-8b08-fc8d4dd539bb] Running
	I0422 10:39:22.176205   15606 system_pods.go:89] "kube-ingress-dns-minikube" [a8f74405-5f73-4306-a4ca-244216a00b42] Pending
	I0422 10:39:22.176210   15606 system_pods.go:89] "kube-proxy-hlgg9" [478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e] Running
	I0422 10:39:22.176219   15606 system_pods.go:89] "kube-scheduler-addons-649657" [d9bdd843-f8d4-45a9-977d-bed508686f8f] Running
	I0422 10:39:22.176227   15606 system_pods.go:89] "nvidia-device-plugin-daemonset-w4vxc" [3bfb0bd5-3242-4f72-9f7c-0c79543badd2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0422 10:39:22.176250   15606 retry.go:31] will retry after 242.480405ms: missing components: kube-dns
	I0422 10:39:22.486107   15606 system_pods.go:86] 9 kube-system pods found
	I0422 10:39:22.486145   15606 system_pods.go:89] "coredns-7db6d8ff4d-2mxqp" [aa2ffe62-c568-4ca9-b23a-2976185dc0c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:22.486156   15606 system_pods.go:89] "coredns-7db6d8ff4d-tlwhf" [8980ac23-fb3e-457f-b6bb-b238465edfbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:22.486164   15606 system_pods.go:89] "etcd-addons-649657" [32390e86-daf9-4070-978d-3fe7fe2f42ca] Running
	I0422 10:39:22.486173   15606 system_pods.go:89] "kube-apiserver-addons-649657" [a13ce6af-f99b-4f74-beb5-e99cb393909e] Running
	I0422 10:39:22.486180   15606 system_pods.go:89] "kube-controller-manager-addons-649657" [add2d793-686a-4b81-8b08-fc8d4dd539bb] Running
	I0422 10:39:22.486190   15606 system_pods.go:89] "kube-ingress-dns-minikube" [a8f74405-5f73-4306-a4ca-244216a00b42] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0422 10:39:22.486196   15606 system_pods.go:89] "kube-proxy-hlgg9" [478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e] Running
	I0422 10:39:22.486203   15606 system_pods.go:89] "kube-scheduler-addons-649657" [d9bdd843-f8d4-45a9-977d-bed508686f8f] Running
	I0422 10:39:22.486217   15606 system_pods.go:89] "nvidia-device-plugin-daemonset-w4vxc" [3bfb0bd5-3242-4f72-9f7c-0c79543badd2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0422 10:39:22.486234   15606 retry.go:31] will retry after 479.404499ms: missing components: kube-dns
	I0422 10:39:22.499410   15606 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0422 10:39:22.499436   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0422 10:39:22.533062   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.117095596s)
	I0422 10:39:22.533125   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:22.533137   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:22.533454   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:22.533502   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:22.533522   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:22.533538   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:22.533549   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:22.533813   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:22.533868   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:22.533880   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:22.654717   15606 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0422 10:39:22.654743   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0422 10:39:22.982050   15606 system_pods.go:86] 10 kube-system pods found
	I0422 10:39:22.982083   15606 system_pods.go:89] "coredns-7db6d8ff4d-2mxqp" [aa2ffe62-c568-4ca9-b23a-2976185dc0c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:22.982110   15606 system_pods.go:89] "coredns-7db6d8ff4d-tlwhf" [8980ac23-fb3e-457f-b6bb-b238465edfbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:22.982121   15606 system_pods.go:89] "etcd-addons-649657" [32390e86-daf9-4070-978d-3fe7fe2f42ca] Running
	I0422 10:39:22.982129   15606 system_pods.go:89] "kube-apiserver-addons-649657" [a13ce6af-f99b-4f74-beb5-e99cb393909e] Running
	I0422 10:39:22.982138   15606 system_pods.go:89] "kube-controller-manager-addons-649657" [add2d793-686a-4b81-8b08-fc8d4dd539bb] Running
	I0422 10:39:22.982151   15606 system_pods.go:89] "kube-ingress-dns-minikube" [a8f74405-5f73-4306-a4ca-244216a00b42] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0422 10:39:22.982167   15606 system_pods.go:89] "kube-proxy-hlgg9" [478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e] Running
	I0422 10:39:22.982175   15606 system_pods.go:89] "kube-scheduler-addons-649657" [d9bdd843-f8d4-45a9-977d-bed508686f8f] Running
	I0422 10:39:22.982185   15606 system_pods.go:89] "nvidia-device-plugin-daemonset-w4vxc" [3bfb0bd5-3242-4f72-9f7c-0c79543badd2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0422 10:39:22.982199   15606 system_pods.go:89] "storage-provisioner" [3f7923bd-3f6b-44d8-846c-ed7eee65a6df] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0422 10:39:22.982221   15606 retry.go:31] will retry after 560.513153ms: missing components: kube-dns
	I0422 10:39:23.029058   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0422 10:39:23.706759   15606 system_pods.go:86] 11 kube-system pods found
	I0422 10:39:23.706791   15606 system_pods.go:89] "coredns-7db6d8ff4d-2mxqp" [aa2ffe62-c568-4ca9-b23a-2976185dc0c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:23.706798   15606 system_pods.go:89] "coredns-7db6d8ff4d-tlwhf" [8980ac23-fb3e-457f-b6bb-b238465edfbd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:23.706804   15606 system_pods.go:89] "etcd-addons-649657" [32390e86-daf9-4070-978d-3fe7fe2f42ca] Running
	I0422 10:39:23.706809   15606 system_pods.go:89] "kube-apiserver-addons-649657" [a13ce6af-f99b-4f74-beb5-e99cb393909e] Running
	I0422 10:39:23.706813   15606 system_pods.go:89] "kube-controller-manager-addons-649657" [add2d793-686a-4b81-8b08-fc8d4dd539bb] Running
	I0422 10:39:23.706819   15606 system_pods.go:89] "kube-ingress-dns-minikube" [a8f74405-5f73-4306-a4ca-244216a00b42] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0422 10:39:23.706824   15606 system_pods.go:89] "kube-proxy-hlgg9" [478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e] Running
	I0422 10:39:23.706828   15606 system_pods.go:89] "kube-scheduler-addons-649657" [d9bdd843-f8d4-45a9-977d-bed508686f8f] Running
	I0422 10:39:23.706834   15606 system_pods.go:89] "nvidia-device-plugin-daemonset-w4vxc" [3bfb0bd5-3242-4f72-9f7c-0c79543badd2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0422 10:39:23.706838   15606 system_pods.go:89] "registry-nqc7x" [b64590e0-a02f-45d2-8f1e-198288db17c6] Pending
	I0422 10:39:23.706843   15606 system_pods.go:89] "storage-provisioner" [3f7923bd-3f6b-44d8-846c-ed7eee65a6df] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0422 10:39:23.706855   15606 retry.go:31] will retry after 487.757207ms: missing components: kube-dns
	I0422 10:39:24.299496   15606 system_pods.go:86] 14 kube-system pods found
	I0422 10:39:24.299537   15606 system_pods.go:89] "coredns-7db6d8ff4d-2mxqp" [aa2ffe62-c568-4ca9-b23a-2976185dc0c0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:24.299548   15606 system_pods.go:89] "coredns-7db6d8ff4d-tlwhf" [8980ac23-fb3e-457f-b6bb-b238465edfbd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 10:39:24.299562   15606 system_pods.go:89] "etcd-addons-649657" [32390e86-daf9-4070-978d-3fe7fe2f42ca] Running
	I0422 10:39:24.299568   15606 system_pods.go:89] "kube-apiserver-addons-649657" [a13ce6af-f99b-4f74-beb5-e99cb393909e] Running
	I0422 10:39:24.299573   15606 system_pods.go:89] "kube-controller-manager-addons-649657" [add2d793-686a-4b81-8b08-fc8d4dd539bb] Running
	I0422 10:39:24.299580   15606 system_pods.go:89] "kube-ingress-dns-minikube" [a8f74405-5f73-4306-a4ca-244216a00b42] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0422 10:39:24.299586   15606 system_pods.go:89] "kube-proxy-hlgg9" [478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e] Running
	I0422 10:39:24.299593   15606 system_pods.go:89] "kube-scheduler-addons-649657" [d9bdd843-f8d4-45a9-977d-bed508686f8f] Running
	I0422 10:39:24.299600   15606 system_pods.go:89] "metrics-server-c59844bb4-phnbq" [ce74ad1e-3a35-470e-962e-901dcdc84a6d] Pending
	I0422 10:39:24.299611   15606 system_pods.go:89] "nvidia-device-plugin-daemonset-w4vxc" [3bfb0bd5-3242-4f72-9f7c-0c79543badd2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0422 10:39:24.299621   15606 system_pods.go:89] "registry-nqc7x" [b64590e0-a02f-45d2-8f1e-198288db17c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0422 10:39:24.299636   15606 system_pods.go:89] "registry-proxy-kvfwc" [8ff782c8-8bc1-4ee5-96c7-36c9b42dd909] Pending
	I0422 10:39:24.299645   15606 system_pods.go:89] "storage-provisioner" [3f7923bd-3f6b-44d8-846c-ed7eee65a6df] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0422 10:39:24.299654   15606 system_pods.go:89] "tiller-deploy-6677d64bcd-6gjgv" [8fff0c69-9c68-4af8-962b-aa26874d6504] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0422 10:39:24.299664   15606 system_pods.go:126] duration metric: took 2.3508754s to wait for k8s-apps to be running ...
	I0422 10:39:24.299676   15606 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 10:39:24.299730   15606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
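systemctl is-active --quiet exits 0 only when the unit is active, so the command above doubles as the kubelet-running check; the matching Completed line appears at 10:39:30 below. The same check by hand (sketch):

	minikube -p addons-649657 ssh "sudo systemctl is-active kubelet"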
	I0422 10:39:25.008742   15606 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0422 10:39:25.008789   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:25.012108   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:25.012519   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:25.012553   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:25.012754   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:25.013004   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:25.013183   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:25.013396   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:25.300400   15606 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0422 10:39:25.363746   15606 addons.go:234] Setting addon gcp-auth=true in "addons-649657"
	I0422 10:39:25.363804   15606 host.go:66] Checking if "addons-649657" exists ...
	I0422 10:39:25.364109   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:25.364136   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:25.378400   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34151
	I0422 10:39:25.378800   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:25.379302   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:25.379335   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:25.379645   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:25.380256   15606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 10:39:25.380284   15606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 10:39:25.395868   15606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I0422 10:39:25.396314   15606 main.go:141] libmachine: () Calling .GetVersion
	I0422 10:39:25.396838   15606 main.go:141] libmachine: Using API Version  1
	I0422 10:39:25.396865   15606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 10:39:25.397165   15606 main.go:141] libmachine: () Calling .GetMachineName
	I0422 10:39:25.397370   15606 main.go:141] libmachine: (addons-649657) Calling .GetState
	I0422 10:39:25.399054   15606 main.go:141] libmachine: (addons-649657) Calling .DriverName
	I0422 10:39:25.399268   15606 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0422 10:39:25.399290   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHHostname
	I0422 10:39:25.401851   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:25.402216   15606 main.go:141] libmachine: (addons-649657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:fb:c8", ip: ""} in network mk-addons-649657: {Iface:virbr1 ExpiryTime:2024-04-22 11:38:39 +0000 UTC Type:0 Mac:52:54:00:fd:fb:c8 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:addons-649657 Clientid:01:52:54:00:fd:fb:c8}
	I0422 10:39:25.402242   15606 main.go:141] libmachine: (addons-649657) DBG | domain addons-649657 has defined IP address 192.168.39.194 and MAC address 52:54:00:fd:fb:c8 in network mk-addons-649657
	I0422 10:39:25.402403   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHPort
	I0422 10:39:25.402563   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHKeyPath
	I0422 10:39:25.402708   15606 main.go:141] libmachine: (addons-649657) Calling .GetSSHUsername
	I0422 10:39:25.402868   15606 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/addons-649657/id_rsa Username:docker}
	I0422 10:39:27.783212   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.319725015s)
	I0422 10:39:27.783274   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783287   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783285   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.185427025s)
	I0422 10:39:27.783325   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783342   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783381   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.818914555s)
	I0422 10:39:27.783414   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783327   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.182561698s)
	I0422 10:39:27.783430   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783447   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783461   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783505   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.743122797s)
	I0422 10:39:27.783535   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783547   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783625   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.701301891s)
	I0422 10:39:27.783647   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783658   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783685   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.783712   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.783729   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.558591613s)
	I0422 10:39:27.783737   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.783745   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.783753   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783761   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783773   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783782   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783846   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.783845   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.467931612s)
	I0422 10:39:27.783847   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.783864   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783872   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.783881   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783889   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783889   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.783909   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.783873   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783934   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.783947   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.783956   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.783981   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.784042   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.594505418s)
	I0422 10:39:27.784077   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.784091   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.784051   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.784154   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.784163   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.784171   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.783937   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.051686648s)
	W0422 10:39:27.784217   15606 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0422 10:39:27.784235   15606 retry.go:31] will retry after 158.301195ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
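The failure above is an ordering race rather than a broken manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same apply as the CRDs that define its kind, and the apiserver has not established the new REST mapping yet, hence "no matches for kind VolumeSnapshotClass". minikube simply retries (and at 10:39:27.943 below re-runs the apply with --force), which succeeds once the CRDs are established. A hedged sketch of the conventional two-phase alternative, applying and waiting on the CRDs before the class (not what the addon manager does):

	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml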
	I0422 10:39:27.784287   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.784298   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.784311   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.784319   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.784343   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.784383   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.784398   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.784416   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.784417   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.784423   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.784428   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.784431   15606 addons.go:470] Verifying addon ingress=true in "addons-649657"
	I0422 10:39:27.784437   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.784447   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.788055   15606 out.go:177] * Verifying ingress addon...
	I0422 10:39:27.784515   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.784538   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.785301   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.785325   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.785340   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.785355   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.785369   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.785387   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.785399   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.785418   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.785431   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.785448   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.785484   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.786588   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.786609   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.789373   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.789387   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.789390   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.789407   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.789417   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.789438   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.789447   15606 addons.go:470] Verifying addon registry=true in "addons-649657"
	I0422 10:39:27.789474   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.789488   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.789497   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.789498   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.789509   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.789509   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.789516   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.791523   15606 out.go:177] * Verifying registry addon...
	I0422 10:39:27.789590   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.789770   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.789789   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.789817   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.789842   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.789860   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.789859   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.790283   15606 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0422 10:39:27.792805   15606 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-649657 service yakd-dashboard -n yakd-dashboard
	
	I0422 10:39:27.792853   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.792864   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.793897   15606 addons.go:470] Verifying addon metrics-server=true in "addons-649657"
	I0422 10:39:27.792880   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.793652   15606 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0422 10:39:27.855511   15606 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0422 10:39:27.855534   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:27.856217   15606 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0422 10:39:27.856233   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:27.875605   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.875630   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.875965   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.876006   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.876014   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	W0422 10:39:27.876093   15606 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
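The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: the addon read the local-path StorageClass, another client updated it in the meantime, and the stale write was rejected. The default-class annotation can simply be reapplied once things settle, e.g. (sketch):

	kubectl --context addons-649657 patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'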
	I0422 10:39:27.883802   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:27.883829   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:27.884125   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:27.884146   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:27.884157   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:27.943389   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0422 10:39:28.299257   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:28.299823   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:28.801199   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:28.804793   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:29.298871   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:29.300684   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:29.802138   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:29.802287   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:30.242588   15606 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.942836611s)
	I0422 10:39:30.242614   15606 system_svc.go:56] duration metric: took 5.942935897s WaitForService to wait for kubelet
	I0422 10:39:30.242622   15606 kubeadm.go:576] duration metric: took 12.396053479s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 10:39:30.242638   15606 node_conditions.go:102] verifying NodePressure condition ...
	I0422 10:39:30.242593   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.21349134s)
	I0422 10:39:30.242664   15606 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.843379723s)
	I0422 10:39:30.242696   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:30.242715   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:30.244302   15606 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0422 10:39:30.243051   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:30.243088   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:30.245924   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:30.247284   15606 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0422 10:39:30.245942   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:30.248471   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:30.248520   15606 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0422 10:39:30.248541   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0422 10:39:30.248707   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:30.248759   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:30.248788   15606 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-649657"
	I0422 10:39:30.248738   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:30.250241   15606 out.go:177] * Verifying csi-hostpath-driver addon...
	I0422 10:39:30.252590   15606 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0422 10:39:30.258210   15606 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 10:39:30.258251   15606 node_conditions.go:123] node cpu capacity is 2
	I0422 10:39:30.258263   15606 node_conditions.go:105] duration metric: took 15.621056ms to run NodePressure ...
	I0422 10:39:30.258277   15606 start.go:240] waiting for startup goroutines ...
	I0422 10:39:30.265074   15606 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0422 10:39:30.265093   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:30.296984   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:30.299915   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:30.419621   15606 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0422 10:39:30.419650   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0422 10:39:30.471923   15606 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0422 10:39:30.471950   15606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0422 10:39:30.526450   15606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0422 10:39:30.547580   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.604143349s)
	I0422 10:39:30.547641   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:30.547658   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:30.547924   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:30.547945   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:30.547953   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:30.547961   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:30.547966   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:30.548269   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:30.548305   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:30.548311   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:30.758827   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:30.797700   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:30.801195   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:31.271724   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:31.300487   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:31.300613   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:31.761639   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:31.815098   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:31.819575   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:32.240516   15606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.714017942s)
	I0422 10:39:32.240564   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:32.240577   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:32.240858   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:32.240930   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:32.240949   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:32.240984   15606 main.go:141] libmachine: Making call to close driver server
	I0422 10:39:32.240996   15606 main.go:141] libmachine: (addons-649657) Calling .Close
	I0422 10:39:32.241309   15606 main.go:141] libmachine: (addons-649657) DBG | Closing plugin on server side
	I0422 10:39:32.241358   15606 main.go:141] libmachine: Successfully made call to close driver server
	I0422 10:39:32.241371   15606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 10:39:32.242858   15606 addons.go:470] Verifying addon gcp-auth=true in "addons-649657"
	I0422 10:39:32.244792   15606 out.go:177] * Verifying gcp-auth addon...
	I0422 10:39:32.246984   15606 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0422 10:39:32.266191   15606 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0422 10:39:32.266209   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:32.267080   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:32.300005   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:32.300196   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:32.751256   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:32.761955   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:32.797859   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:32.799616   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:33.265982   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:33.267181   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:33.297656   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:33.302788   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:33.751461   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:33.757816   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:33.798902   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:33.800497   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:34.250839   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:34.258828   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:34.297755   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:34.301280   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:34.750813   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:34.758807   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:34.796724   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:34.799756   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:35.251169   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:35.258406   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:35.298627   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:35.299001   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:35.750876   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:35.758841   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:35.797734   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:35.800889   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:36.253582   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:36.269805   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:36.301799   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:36.311060   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:36.750961   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:36.757145   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:36.797672   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:36.800879   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:37.251526   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:37.260658   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:37.304630   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:37.304881   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:37.751271   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:37.758046   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:37.797954   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:37.800328   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:38.251162   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:38.258429   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:38.297524   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:38.300386   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:38.752755   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:38.758989   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:38.797163   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:38.800149   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:39.251109   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:39.257602   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:39.297791   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:39.300169   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:39.751263   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:39.757901   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:39.797454   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:39.798721   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:40.251228   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:40.261909   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:40.300493   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:40.300699   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:40.751302   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:40.761298   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:40.797321   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:40.798468   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:41.251521   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:41.263996   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:41.298298   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:41.300270   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:41.751745   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:41.764737   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:41.797451   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:41.799675   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:42.251408   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:42.259193   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:42.297977   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:42.304050   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:42.751187   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:42.758257   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:42.797562   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:42.807459   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:43.250916   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:43.259221   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:43.297459   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:43.299596   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:43.750817   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:43.761029   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:43.799000   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:43.799066   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:44.251135   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:44.258704   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:44.299006   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:44.299086   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:44.750327   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:44.758248   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:44.797967   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:44.800002   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:45.250557   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:45.258064   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:45.298860   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:45.299654   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:45.751219   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:45.758124   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:45.798370   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:45.798932   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:46.252029   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:46.258806   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:46.297145   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:46.300690   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:46.751268   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:46.757851   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:46.805402   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:46.806737   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:47.251439   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:47.258208   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:47.297444   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:47.299881   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:47.751870   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:47.759221   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:47.797544   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:47.799709   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:48.251071   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:48.258353   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:48.303375   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:48.305920   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:48.751827   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:48.759582   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:48.798619   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:48.799679   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:49.251198   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:49.259409   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:49.299548   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:49.301012   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:49.755469   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:49.761322   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:49.797483   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:49.798720   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:50.251641   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:50.258085   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:50.299914   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:50.299992   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:50.751760   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:50.758264   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:50.799520   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:50.802665   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:51.251538   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:51.263946   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:51.297675   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:51.298695   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:51.753049   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:51.761455   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:51.797861   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:51.799508   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:52.251559   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:52.257749   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:52.299927   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:52.308948   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:52.751296   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:52.759782   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:52.798902   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:52.798910   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:53.250749   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:53.258689   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:53.297244   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:53.299761   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:53.751892   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:53.759152   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:53.798507   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:53.803716   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:54.250975   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:54.257630   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:54.300665   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:54.300782   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:54.751392   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:54.758392   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:54.798282   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:54.800575   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:55.251324   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:55.257476   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:55.300423   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:55.300764   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:55.751775   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:55.763339   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:55.798034   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:55.799675   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:56.251303   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:56.258338   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:56.298365   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:56.298393   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:56.752173   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:56.761943   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:56.797699   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:56.804969   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:57.252199   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:57.257182   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:57.298155   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:57.303012   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:57.750666   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:57.758383   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:57.798762   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:57.799070   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:58.251655   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:58.258124   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:58.298880   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:58.299905   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:58.751840   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:58.766025   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:58.797739   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:58.801680   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:59.250989   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:59.259445   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:59.298158   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:59.314169   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:39:59.752736   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:39:59.757143   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:39:59.798136   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:39:59.805716   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:00.250990   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:00.258280   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:00.301503   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:00.302277   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:00.753021   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:00.759186   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:00.799743   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:00.801834   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:01.251155   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:01.259226   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:01.299550   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:01.299707   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:01.751712   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:01.764358   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:01.799014   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:01.799754   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:02.251177   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:02.258254   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:02.298193   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:02.299894   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:02.752958   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:02.758480   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:02.801194   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:02.812069   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:03.251464   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:03.258517   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:03.299302   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:03.299873   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:03.751050   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:03.765175   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:03.798038   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:03.799223   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:04.251728   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:04.258823   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:04.298274   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:04.300902   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:04.752103   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:04.758199   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:04.799781   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:04.801007   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:05.251681   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:05.258298   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:05.297918   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:05.300154   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:05.751029   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:05.761342   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:05.797917   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:05.799735   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:06.251578   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:06.258433   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:06.298829   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:06.301452   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:06.750936   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:06.757849   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:06.801027   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:06.803421   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:07.250944   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:07.258624   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:07.297089   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:07.299015   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:07.750416   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:07.758764   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:07.798767   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:07.805978   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:08.253277   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:08.259792   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:08.298214   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:08.298678   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:08.751546   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:08.757982   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:08.798634   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:08.799169   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:09.250986   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:09.257529   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:09.299385   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:09.300953   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:09.751385   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:09.758185   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:09.799208   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:09.799616   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:10.251445   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:10.257823   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:10.298028   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:10.298342   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:10.750909   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:10.761964   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:10.798381   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:10.800410   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:11.252543   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:11.258781   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:11.299932   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:11.306132   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:11.750600   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:11.758905   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:11.797691   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:11.802223   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:12.253640   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:12.260688   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:12.297104   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:12.301292   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:12.750611   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:12.758383   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:12.801802   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:12.803665   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:13.250966   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:13.258244   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:13.298662   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:13.300326   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:13.751050   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:13.758361   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:13.798608   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:13.800389   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:14.252367   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:14.267261   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:14.302906   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:14.303394   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:14.798046   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:14.798265   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:14.802663   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:14.802706   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:15.251221   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:15.257621   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:15.298483   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:15.300370   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:15.751142   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:15.757914   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:15.812832   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:15.816012   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:16.251522   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:16.258129   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:16.298113   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:16.298212   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:16.750663   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:16.775589   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:16.797885   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:16.799887   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:17.251144   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:17.257909   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:17.298004   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:17.300266   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:17.750661   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:17.758816   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:17.797701   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:17.799348   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:18.250691   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:18.258501   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:18.297446   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:18.300071   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:18.750497   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:18.757679   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:18.796765   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:18.798809   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:19.251794   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:19.259235   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:19.297506   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:19.298969   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:19.853554   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:19.853878   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:19.854038   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:19.855319   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:20.252209   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:20.257756   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:20.299011   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:20.299063   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:20.751382   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:20.757832   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:20.806441   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:20.806452   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:21.250726   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:21.258060   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:21.298943   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:21.300858   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:21.752081   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:21.757735   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:21.797344   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:21.799600   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:22.251859   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:22.259630   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:22.298901   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:22.304069   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:22.751603   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:22.761980   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:22.797775   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:22.798953   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 10:40:23.251760   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:23.258171   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:23.299201   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:23.300634   15606 kapi.go:107] duration metric: took 55.506982184s to wait for kubernetes.io/minikube-addons=registry ...
	I0422 10:40:23.753660   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:23.762677   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:23.798049   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:24.251854   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:24.259414   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:24.298371   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:24.751919   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:24.758375   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:24.798104   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:25.251811   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:25.259688   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:25.298302   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:25.751683   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:25.759321   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:25.798207   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:26.252940   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:26.259705   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:26.301105   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:26.751562   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:26.759990   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:26.798134   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:27.252558   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:27.261576   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:27.296830   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:27.753534   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:27.758909   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:27.797134   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:28.252049   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:28.260236   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:28.299520   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:28.751892   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:28.760373   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:28.797051   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:29.250986   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:29.261839   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:29.298023   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:29.751572   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:29.760416   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:29.801131   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:30.252023   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:30.259332   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:30.298473   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:30.752049   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:30.758630   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:30.798145   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:31.251861   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:31.257619   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:31.298754   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:31.752158   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:31.762633   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:31.796985   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:32.430548   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:32.431278   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:32.434600   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:32.751309   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:32.757904   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:32.799199   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:33.254139   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:33.259089   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:33.302753   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:33.751922   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:33.759486   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:33.796719   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:34.251844   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:34.259913   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:34.297216   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:34.751487   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:34.758188   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:34.799094   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:35.252076   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:35.257462   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:35.306254   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:35.750805   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:35.758567   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:35.796693   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:36.250929   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:36.258380   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:36.299296   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:36.752208   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:36.758150   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:36.797833   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:37.251178   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:37.258048   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:37.298653   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:37.924941   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:37.925237   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:37.931565   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:38.251058   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:38.261300   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:38.299242   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:38.763012   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:38.766879   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:38.797456   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:39.254956   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:39.288453   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:39.307320   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:39.771053   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:39.775698   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:39.806650   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:40.251281   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:40.259579   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:40.297788   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:40.752516   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:40.759435   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:40.798584   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:41.252858   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:41.273873   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:41.297825   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:41.751233   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:41.757940   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:41.796850   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:42.251375   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:42.257975   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:42.300288   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:42.751486   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:42.758026   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:42.797822   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:43.258102   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:43.263026   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:43.298426   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:43.754545   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:43.762931   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:43.800726   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:44.250868   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:44.258552   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:44.297608   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:44.751134   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:44.758548   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:44.797161   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:45.251529   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:45.259102   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:45.297321   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:45.754959   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:45.782860   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:45.797636   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:46.251351   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:46.258114   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:46.297497   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:46.756020   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:46.759590   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:46.801551   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:47.534483   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:47.534708   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:47.535110   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:47.751724   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:47.762425   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 10:40:47.797209   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:48.251760   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:48.258445   15606 kapi.go:107] duration metric: took 1m18.005855744s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0422 10:40:48.301698   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:48.751218   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:48.798612   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:49.251264   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:49.298955   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:49.751668   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:49.802779   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:50.251195   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:50.297484   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:50.751843   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:50.799236   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:51.250573   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:51.297974   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:51.751962   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:51.796947   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:52.250884   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:52.298851   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:52.751207   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:52.797286   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:53.251647   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:53.298467   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:53.750890   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:53.798417   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:54.251631   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:54.298183   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:54.750683   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:54.798028   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:55.251792   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:55.298902   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:55.751465   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:55.798679   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:56.250737   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:56.300354   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:56.751588   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:56.797380   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:57.250484   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:57.297658   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:57.751703   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:57.798010   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:58.295574   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:58.303613   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:58.750629   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:58.797817   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:59.251117   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:59.298418   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:40:59.750592   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:40:59.797730   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:00.251575   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:00.300993   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:00.751855   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:00.798247   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:01.250654   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:01.297955   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:01.751670   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:01.797413   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:02.250616   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:02.298094   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:02.752564   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:02.798887   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:03.252402   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:03.298411   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:03.750773   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:03.798066   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:04.251821   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:04.297927   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:04.751137   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:04.797132   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:05.251887   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:05.298236   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:05.751184   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:05.797400   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:06.251202   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:06.297780   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:06.751764   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:06.798132   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:07.252070   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:07.297125   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:07.752050   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:07.797336   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:08.251123   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:08.297588   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:08.755796   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:08.798633   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:09.251016   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:09.298406   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:09.751102   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:09.798069   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:10.251876   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:10.298188   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:10.751620   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:10.798800   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:11.251034   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:11.297051   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:11.751410   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:11.798269   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:12.251431   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:12.298630   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:12.751250   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:12.798501   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:13.251631   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:13.298223   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:13.750380   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:13.797923   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:14.251731   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:14.298277   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:14.750823   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:14.798099   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:15.251884   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:15.300452   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:15.750855   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:15.798440   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:16.251296   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:16.297527   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:16.752451   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:16.798551   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:17.250864   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:17.298478   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:17.750939   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:17.799633   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:18.250835   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:18.297891   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:18.751241   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:18.797445   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:19.250367   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:19.298783   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:19.751053   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:19.799373   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:20.251006   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:20.298159   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:20.751223   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:20.797350   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:21.250591   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:21.299166   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:21.751771   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:21.797746   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:22.251097   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:22.298013   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:22.750708   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:22.797828   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:23.251683   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:23.297817   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:23.752014   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:23.799643   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:24.251123   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:24.297842   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:24.752707   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:24.797899   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:25.251909   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:25.298492   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:25.750630   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:25.799055   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:26.251965   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:26.299100   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:26.751733   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:26.803501   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:27.251221   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:27.297546   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:27.750260   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:27.799089   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:28.251648   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:28.297612   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:28.750560   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:28.797944   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:29.250924   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:29.301473   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:29.751510   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:29.797721   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:30.251385   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:30.298764   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:30.751068   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:30.799710   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:31.250821   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:31.298364   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:31.750579   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:31.797897   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:32.252247   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:32.297545   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:32.750749   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:32.797648   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:33.252225   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:33.297169   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:33.751792   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:33.798045   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:34.251913   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:34.298671   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:34.750682   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:34.797741   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:35.251434   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:35.298521   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:35.750556   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:35.797659   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:36.251429   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:36.297586   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:36.751156   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:36.797627   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:37.251265   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:37.297466   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:37.750859   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:37.798212   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:38.251828   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:38.298315   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:38.750913   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:38.798410   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:39.251586   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:39.298998   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:39.750993   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:39.798799   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:40.251181   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:40.298263   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:40.750176   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:40.799171   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:41.250615   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:41.298160   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:41.751472   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:41.798586   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:42.250952   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:42.297870   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:42.751007   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:42.796857   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:43.251099   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:43.297604   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:43.750936   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:43.798524   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:44.250936   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:44.300719   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:44.751303   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:44.797720   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:45.251005   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:45.297358   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:45.752199   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:45.797628   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:46.251149   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:46.298631   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:46.751237   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:46.797269   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:47.251230   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:47.297670   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:47.752174   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:47.797156   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:48.250652   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:48.297732   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:48.751025   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:48.797948   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:49.251697   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:49.297757   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:49.751068   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:49.797897   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:50.251703   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:50.298681   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:50.754086   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:50.797169   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:51.251485   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:51.298835   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:51.750600   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:51.800432   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:52.254810   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:52.298168   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:52.751249   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:52.798498   15606 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 10:41:53.251632   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:53.297710   15606 kapi.go:107] duration metric: took 2m25.507425854s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0422 10:41:53.750790   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:54.250596   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:54.752331   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:55.251034   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:55.750688   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:56.251796   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:56.755417   15606 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 10:41:57.251390   15606 kapi.go:107] duration metric: took 2m25.004403033s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0422 10:41:57.253407   15606 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-649657 cluster.
	I0422 10:41:57.254932   15606 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0422 10:41:57.256417   15606 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0422 10:41:57.257848   15606 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, helm-tiller, yakd, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0422 10:41:57.259177   15606 addons.go:505] duration metric: took 2m39.412582042s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns helm-tiller yakd inspektor-gadget metrics-server default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0422 10:41:57.259217   15606 start.go:245] waiting for cluster config update ...
	I0422 10:41:57.259238   15606 start.go:254] writing updated cluster config ...
	I0422 10:41:57.259503   15606 ssh_runner.go:195] Run: rm -f paused
	I0422 10:41:57.312602   15606 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 10:41:57.314485   15606 out.go:177] * Done! kubectl is now configured to use "addons-649657" cluster and "default" namespace by default
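	
	The gcp-auth notes above describe opting a pod out of credential injection by adding the `gcp-auth-skip-secret` label to the pod configuration. As a minimal sketch (the pod name and image below are illustrative, and the label value "true" is an assumption; the addon's message only requires the label key to be present), such a manifest could look like:
	
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds                 # hypothetical name, not from this test run
	    labels:
	      gcp-auth-skip-secret: "true"     # asks the gcp-auth webhook not to mount GCP credentials
	  spec:
	    containers:
	    - name: app
	      image: gcr.io/google-samples/hello-app:1.0
	
	Applying it with `kubectl --context addons-649657 apply -f pod.yaml` would, per the addon's message, create the pod without the mounted credentials; existing pods would need to be recreated (or the addon re-enabled with --refresh) to pick the behavior up.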
	
	
	==> CRI-O <==
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.166236491Z" level=debug msg="Container or sandbox exited: b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4" file="server/server.go:810"
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.166255280Z" level=debug msg="sandbox infra exited and found: b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4" file="server/server.go:825"
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.171467225Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4.WRIKM2\"" file="server/server.go:805"
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.171863868Z" level=debug msg="Unmounted container b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4" file="storage/runtime.go:495" id=2ccc3935-e1e7-46a5-b4ac-075ed98f903e name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.189845462Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49cef200-6798-4249-aedb-8734f80cf9d6 name=/runtime.v1.RuntimeService/Version
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.190746699Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49cef200-6798-4249-aedb-8734f80cf9d6 name=/runtime.v1.RuntimeService/Version
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.193130373Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=31e355ed-6ed5-4eb3-871e-3db016270130 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.194691055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713782883194660049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31e355ed-6ed5-4eb3-871e-3db016270130 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.195709525Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54e7e20d-32a3-492e-b9d9-1d50b28658ca name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.195881474Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54e7e20d-32a3-492e-b9d9-1d50b28658ca name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.196226208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:da39809ea20386d490cfaf8219db59f46bbdddd9c3b9ef9efdb5ff5f38a11628,PodSandboxId:b17e14cc83602824420c9600bcfc007ad47de22842fe4f413090080028e485b4,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713782702132634183,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-wrvcz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 687a0353-5ece-41c2-8d6f-fe72342f0226,},Annotations:map[string]string{io.kubernetes.container.hash: 3544bfa3,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7239d5ee0e6269bfa36420836c8ed9f52cab4eefffadf575e3b318f44f571dc2,PodSandboxId:c8d2c78f493f4470c16b4971a5ca931f9f6440b4f519e820e25f57bc352316d5,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713782565768368210,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-bb5x7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: e60be124-1cbb-461a-b07a-c7ad8934897d,},Annota
tions:map[string]string{io.kubernetes.container.hash: dc4e7fa6,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c00009bfd8e12ddd3cd3e442734252bf3b33de37d4221b1c728eb6cf1260a7,PodSandboxId:13b8c3288617978a6bd4f51de5a0b637795ded04d7701c43becda5ec0be110a1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713782559252314783,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 4635a414-2076-41e6-b935-fd98104af18f,},Annotations:map[string]string{io.kubernetes.container.hash: 59547dc7,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98ce745d0650b4b452c0bafed8251ffb60086e77a34f7a677a87af3eb5451dd6,PodSandboxId:0089ee81a28640fa1a90a60bb9e1b3c80c2d23f75a963dc2ca2af0c5fc3aaa10,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713782516590170356,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9bc6d,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: aafecea0-aca4-4896-8e41-40e809b7f5e6,},Annotations:map[string]string{io.kubernetes.container.hash: d6b5e118,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82837449f00a1cb15cf7b006d0101df1980e30f5ef698f0292f8a651cbd753c2,PodSandboxId:e87eadec82e10c1701c89969856aaafbd4070e52efd8817a92d9d74699dd7a5b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,Stat
e:CONTAINER_RUNNING,CreatedAt:1713782425464910741,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-s7lgz,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a83e43e-cd63-407e-aab4-be83ab5f77f8,},Annotations:map[string]string{io.kubernetes.container.hash: 300bd0b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5629db3b79bb4ff9c4eb32fbca70aee4c1d8b18df6f187a73b65df7032c571,PodSandboxId:a582669af38b5231f48d12ecc1fa1a647cccb1677168e44b30e7bc8fb3805fe0,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f
75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1713782415095134592,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-rz9f2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 5d5608ee-50a3-46d3-9363-9bef97083ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 27ac0260,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:747bedb41a6513d6f0ed3d498d50de4156ef98e6ab9e372c254f10629802adfe,PodSandboxId:b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_EXITED,CreatedAt:1713782398948427594,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-phnbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce74ad1e-3a35-470e-962e-901dcdc84a6d,},Annotations:map[string]string{io.kubernetes.container.hash: b13acbe3,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca8963cd05ebf8e5c2ab895728ad12bcce57a49e34622e946bd3d0130d46b17,PodSandboxId:8f9a4ee47413b897023b0adcccede17a3cdcd71a8350a3303689eafcd2eabf67,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713782365309632220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7923bd-3f6b-44d8-846c-ed7eee65a6df,},Annotations:map[string]string{io.kubernetes.container.hash: e52f1f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e399e5bad2ea7afff9b2984de13eba02623820f9d265c24111fb4f7ca6de5c,PodSandboxId:8fada0962ee40a9c874c76d55e10b0575be2ba864816e8a92688313389381590,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Ima
ge:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713782363333380774,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2mxqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa2ffe62-c568-4ca9-b23a-2976185dc0c0,},Annotations:map[string]string{io.kubernetes.container.hash: a67ae4fb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc4a
e3f334eb8f15a02ce1cb74c938edda287420283c1625060ec6de34223cfc,PodSandboxId:62aa08616b67da6632a53210cfbbdcef6c311a35aae53ae9364e167f48faf281,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713782360160734997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hlgg9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 478bfbcb-c8d1-4a0b-b13c-84e8892d1d3e,},Annotations:map[string]string{io.kubernetes.container.hash: 9854acf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:228b3107896998ed67a5c41465c19156bb68c
40d0b7d32997369f4ceea0e9199,PodSandboxId:75be32aa73a6fdf9bbf430ca63dcb63c2f8f13d58e9d91b7f9206327239a5f46,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713782339938654563,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee34a8718d450cdc971ff15e6bcf368,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49a61e188d41b17b7cae7258b2e08215974cb51d7f7cb89893a9e4
eb40fc5a3d,PodSandboxId:50ca3ae500a2e0a6107d981b0924c139233deff63e724a3a01b355cb298b8b17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713782339926609237,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a98542d021c1579a6297e229b3c72ace,},Annotations:map[string]string{io.kubernetes.container.hash: f56b21d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87c8e5071b3101013333015fc0e2d11e262168ef3ae336c3da95c8911871553,PodSan
dboxId:92a4ef4ade159a5ee065deb5945fd4c857ccacf6e702b5496a97bdc22bcfe791,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713782339846013301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7fa9beead7b52a4e887c1dc4431871,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dce2c24d181494f5a032b1ca97445bc0c8ca16e280f781f2fe9667680c6f
f00,PodSandboxId:701814d87562835b2262a0ad5c2424dca08ae4bc77de5e34afcd4ebc6da23a1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713782339798738126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-649657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b0f2f70abeaca245aa3f96738d8202,},Annotations:map[string]string{io.kubernetes.container.hash: 78f47633,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54e7e20d-32a3-492e-b9d9-1d50b28658ca name=/runtime.v1.RuntimeService/List
Containers
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.205872262Z" level=debug msg="Found exit code for b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4: 0" file="oci/runtime_oci.go:1022"
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.206000735Z" level=debug msg="Skipping status update for: &{State:{Version:1.0.2-dev ID:b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4 Status:stopped Pid:0 Bundle:/run/containers/storage/overlay-containers/b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4/userdata Annotations:map[io.container.manager:cri-o io.kubernetes.container.name:POD io.kubernetes.cri-o.Annotations:{\"kubernetes.io/config.source\":\"api\",\"kubernetes.io/config.seen\":\"2024-04-22T10:39:24.243289527Z\"} io.kubernetes.cri-o.CNIResult:{\"cniVersion\":\"1.0.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"8e:3e:37:fd:1d:89\"},{\"name\":\"veth5127eafd\",\"mac\":\"d6:73:52:46:b6:84\"},{\"name\":\"eth0\",\"mac\":\"a2:9f:55:73:e4:f9\",\"sandbox\":\"/var/run/netns/d9b0db00-4633-4ada-89a1-f06759803c0f\"}],\"ips\":[{\"interface\":2,\"address\":\"10.244.0.7/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244
.0.1\"}],\"dns\":{}} io.kubernetes.cri-o.CgroupParent:/kubepods/burstable/podce74ad1e-3a35-470e-962e-901dcdc84a6d io.kubernetes.cri-o.ContainerID:b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4 io.kubernetes.cri-o.ContainerName:k8s_POD_metrics-server-c59844bb4-phnbq_kube-system_ce74ad1e-3a35-470e-962e-901dcdc84a6d_0 io.kubernetes.cri-o.ContainerType:sandbox io.kubernetes.cri-o.Created:2024-04-22T10:39:24.575072865Z io.kubernetes.cri-o.HostName:metrics-server-c59844bb4-phnbq io.kubernetes.cri-o.HostNetwork:false io.kubernetes.cri-o.HostnamePath:/var/run/containers/storage/overlay-containers/b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4/userdata/hostname io.kubernetes.cri-o.Image:registry.k8s.io/pause:3.9 io.kubernetes.cri-o.ImageName:registry.k8s.io/pause:3.9 io.kubernetes.cri-o.KubeName:metrics-server-c59844bb4-phnbq io.kubernetes.cri-o.Labels:{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"ce74ad1e-3a35-470e-962e-901dcdc84a6d\",\"io.kubernetes.pod.nam
espace\":\"kube-system\",\"io.kubernetes.pod.name\":\"metrics-server-c59844bb4-phnbq\",\"k8s-app\":\"metrics-server\",\"pod-template-hash\":\"c59844bb4\"} io.kubernetes.cri-o.LogPath:/var/log/pods/kube-system_metrics-server-c59844bb4-phnbq_ce74ad1e-3a35-470e-962e-901dcdc84a6d/b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4.log io.kubernetes.cri-o.Metadata:{\"name\":\"metrics-server-c59844bb4-phnbq\",\"uid\":\"ce74ad1e-3a35-470e-962e-901dcdc84a6d\",\"namespace\":\"kube-system\"} io.kubernetes.cri-o.MountPoint:/var/lib/containers/storage/overlay/37c61e4f16acbc97487c88109af1412f306215d9504861e5188adf196ab902bf/merged io.kubernetes.cri-o.Name:k8s_metrics-server-c59844bb4-phnbq_kube-system_ce74ad1e-3a35-470e-962e-901dcdc84a6d_0 io.kubernetes.cri-o.Namespace:kube-system io.kubernetes.cri-o.NamespaceOptions:{\"pid\":1} io.kubernetes.cri-o.PodLinuxOverhead:{} io.kubernetes.cri-o.PodLinuxResources:{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}} io.kubernetes.cri-o.P
ortMappings:[] io.kubernetes.cri-o.PrivilegedRuntime:false io.kubernetes.cri-o.ResolvPath:/var/run/containers/storage/overlay-containers/b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4/userdata/resolv.conf io.kubernetes.cri-o.RuntimeHandler: io.kubernetes.cri-o.SandboxID:b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4 io.kubernetes.cri-o.SandboxName:k8s_metrics-server-c59844bb4-phnbq_kube-system_ce74ad1e-3a35-470e-962e-901dcdc84a6d_0 io.kubernetes.cri-o.SeccompProfilePath:RuntimeDefault io.kubernetes.cri-o.ShmPath:/var/run/containers/storage/overlay-containers/b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4/userdata/shm io.kubernetes.pod.name:metrics-server-c59844bb4-phnbq io.kubernetes.pod.namespace:kube-system io.kubernetes.pod.uid:ce74ad1e-3a35-470e-962e-901dcdc84a6d k8s-app:metrics-server kubernetes.io/config.seen:2024-04-22T10:39:24.243289527Z kubernetes.io/config.source:api pod-template-hash:c59844bb4]} Created:2024-04-22 10:39:27.323364233 +0000 UTC St
arted:2024-04-22 10:39:27.457557559 +0000 UTC m=+37.994542193 Finished:2024-04-22 10:48:03.163927265 +0000 UTC ExitCode:0xc000a481f0 OOMKilled:false SeccompKilled:false Error: InitPid:2882 InitStartTime:5976 CheckpointedAt:0001-01-01 00:00:00 +0000 UTC}" file="oci/runtime_oci.go:946" id=2ccc3935-e1e7-46a5-b4ac-075ed98f903e name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.212329661Z" level=debug msg="Event: REMOVE        \"/var/run/crio/exits/b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4\"" file="server/server.go:805"
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.213250395Z" level=info msg="Stopped pod sandbox: b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4" file="server/sandbox_stop_linux.go:91" id=2ccc3935-e1e7-46a5-b4ac-075ed98f903e name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.213373070Z" level=debug msg="Response: &StopPodSandboxResponse{}" file="otel-collector/interceptors.go:74" id=2ccc3935-e1e7-46a5-b4ac-075ed98f903e name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.216324879Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: ce74ad1e-3a35-470e-962e-901dcdc84a6d,},},}" file="otel-collector/interceptors.go:62" id=9c2b3ac3-f249-41da-a56e-dbfca3f7a140 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.216436299Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-phnbq,Uid:ce74ad1e-3a35-470e-962e-901dcdc84a6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713782364575072865,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-phnbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce74ad1e-3a35-470e-962e-901dcdc84a6d,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T10:39:24.243289527Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9c2b3ac3-f249-41da-a56e-dbfca3f7a140 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.218552103Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4,Verbose:false,}" file="otel-collector/interceptors.go:62" id=436d4f53-78b9-4e7e-bc6b-682c131c6717 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.218672067Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-phnbq,Uid:ce74ad1e-3a35-470e-962e-901dcdc84a6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713782364575072865,Network:&PodSandboxNetworkStatus{Ip:10.244.0.7,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-phnbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce74ad1e-3a35-470e-962e-901dcdc84a6d,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T10:39:24.243289527Z,kubernetes.io/config.
source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=436d4f53-78b9-4e7e-bc6b-682c131c6717 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.220038853Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: ce74ad1e-3a35-470e-962e-901dcdc84a6d,},},}" file="otel-collector/interceptors.go:62" id=9f85e8e4-6daa-48b5-8c80-eae2b331c888 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.220091295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f85e8e4-6daa-48b5-8c80-eae2b331c888 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.220159995Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:747bedb41a6513d6f0ed3d498d50de4156ef98e6ab9e372c254f10629802adfe,PodSandboxId:b711b3fb32b9e1e207139b942bfc0a0e37a216f73e5ab53dbf22fbde2e1de3b4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_EXITED,CreatedAt:1713782398948427594,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-phnbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce74ad1e-3a35-470e-962e-901dcdc84a6d,},Annotations:map[string]string{io.kubernetes.container.hash: b13acbe3,io.kubern
etes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f85e8e4-6daa-48b5-8c80-eae2b331c888 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.223850402Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:747bedb41a6513d6f0ed3d498d50de4156ef98e6ab9e372c254f10629802adfe,Verbose:false,}" file="otel-collector/interceptors.go:62" id=bf67ab9e-1957-42a1-8e39-aabae2526fc6 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 22 10:48:03 addons-649657 crio[680]: time="2024-04-22 10:48:03.224099120Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:747bedb41a6513d6f0ed3d498d50de4156ef98e6ab9e372c254f10629802adfe,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},State:CONTAINER_EXITED,CreatedAt:1713782399003443417,StartedAt:1713782399032178176,FinishedAt:1713782882996919211,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,Reason:Completed,Message:,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-phnbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce74ad1e-3a35-470e-962e-901dcdc84a6d,},Annotations:map[string]string{io.kubernetes.container.hash: b13acbe3
,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/var/lib/kubelet/pods/ce74ad1e-3a35-470e-962e-901dcdc84a6d/volumes/kubernetes.io~empty-dir/tmp-dir,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/ce74ad1e-3a35-470e-962e-901dcdc84a6d/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/ce74ad1e-3a35-470e-962e-901dcdc84a6d/containers/metrics-server/95508c8b,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_P
RIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/ce74ad1e-3a35-470e-962e-901dcdc84a6d/volumes/kubernetes.io~projected/kube-api-access-jm9tf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_metrics-server-c59844bb4-phnbq_ce74ad1e-3a35-470e-962e-901dcdc84a6d/metrics-server/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:948,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=bf67ab9e-1957-42a1-8e39-aabae2526fc6 name=/runtime.v1.RuntimeService/ContainerStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	da39809ea2038       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 3 minutes ago       Running             hello-world-app           0                   b17e14cc83602       hello-world-app-86c47465fc-wrvcz
	7239d5ee0e626       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                   5 minutes ago       Running             headlamp                  0                   c8d2c78f493f4       headlamp-7559bf459f-bb5x7
	13c00009bfd8e       docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9                         5 minutes ago       Running             nginx                     0                   13b8c32886179       nginx
	98ce745d0650b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            6 minutes ago       Running             gcp-auth                  0                   0089ee81a2864       gcp-auth-5db96cd9b4-9bc6d
	82837449f00a1       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   e87eadec82e10       local-path-provisioner-8d985888d-s7lgz
	3b5629db3b79b       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         7 minutes ago       Running             yakd                      0                   a582669af38b5       yakd-dashboard-5ddbf7d777-rz9f2
	747bedb41a651       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   8 minutes ago       Exited              metrics-server            0                   b711b3fb32b9e       metrics-server-c59844bb4-phnbq
	dca8963cd05eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   8f9a4ee47413b       storage-provisioner
	d8e399e5bad2e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   8fada0962ee40       coredns-7db6d8ff4d-2mxqp
	cc4ae3f334eb8       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                        8 minutes ago       Running             kube-proxy                0                   62aa08616b67d       kube-proxy-hlgg9
	228b310789699       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                        9 minutes ago       Running             kube-scheduler            0                   75be32aa73a6f       kube-scheduler-addons-649657
	49a61e188d41b       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                        9 minutes ago       Running             kube-apiserver            0                   50ca3ae500a2e       kube-apiserver-addons-649657
	e87c8e5071b31       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                        9 minutes ago       Running             kube-controller-manager   0                   92a4ef4ade159       kube-controller-manager-addons-649657
	3dce2c24d1814       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        9 minutes ago       Running             etcd                      0                   701814d875628       etcd-addons-649657
	
	
	==> coredns [d8e399e5bad2ea7afff9b2984de13eba02623820f9d265c24111fb4f7ca6de5c] <==
	[INFO] 10.244.0.8:41916 - 38226 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001142617s
	[INFO] 10.244.0.8:47573 - 53521 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000205944s
	[INFO] 10.244.0.8:47573 - 28703 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000109378s
	[INFO] 10.244.0.8:45327 - 40199 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000156094s
	[INFO] 10.244.0.8:45327 - 14341 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000223029s
	[INFO] 10.244.0.8:48774 - 51542 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00012261s
	[INFO] 10.244.0.8:48774 - 4695 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000153651s
	[INFO] 10.244.0.8:40647 - 10104 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000108597s
	[INFO] 10.244.0.8:40647 - 4733 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000122859s
	[INFO] 10.244.0.8:52147 - 48149 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00004062s
	[INFO] 10.244.0.8:52147 - 46870 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000043653s
	[INFO] 10.244.0.8:37217 - 19858 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042755s
	[INFO] 10.244.0.8:37217 - 37008 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000029418s
	[INFO] 10.244.0.8:38214 - 52639 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000033957s
	[INFO] 10.244.0.8:38214 - 23697 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000039481s
	[INFO] 10.244.0.22:56933 - 22929 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000507854s
	[INFO] 10.244.0.22:57259 - 47395 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000367276s
	[INFO] 10.244.0.22:41688 - 8569 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000113891s
	[INFO] 10.244.0.22:43352 - 17413 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100155s
	[INFO] 10.244.0.22:35586 - 62845 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000249472s
	[INFO] 10.244.0.22:57574 - 22388 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000224677s
	[INFO] 10.244.0.22:43388 - 34767 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001462991s
	[INFO] 10.244.0.22:47177 - 64805 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001191611s
	[INFO] 10.244.0.25:37565 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000421805s
	[INFO] 10.244.0.25:44554 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000091614s
	
	
	==> describe nodes <==
	Name:               addons-649657
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-649657
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=addons-649657
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T10_39_06_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-649657
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 10:39:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-649657
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 10:47:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 10:45:12 +0000   Mon, 22 Apr 2024 10:39:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 10:45:12 +0000   Mon, 22 Apr 2024 10:39:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 10:45:12 +0000   Mon, 22 Apr 2024 10:39:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 10:45:12 +0000   Mon, 22 Apr 2024 10:39:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.194
	  Hostname:    addons-649657
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 03fb13485e6f4e5fb50eeba42d90dd5d
	  System UUID:                03fb1348-5e6f-4e5f-b50e-eba42d90dd5d
	  Boot ID:                    df02515d-ac16-46de-9be1-a43fef15fe11
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-wrvcz          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  gcp-auth                    gcp-auth-5db96cd9b4-9bc6d                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  headlamp                    headlamp-7559bf459f-bb5x7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 coredns-7db6d8ff4d-2mxqp                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m45s
	  kube-system                 etcd-addons-649657                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m58s
	  kube-system                 kube-apiserver-addons-649657              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 kube-controller-manager-addons-649657     200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m
	  kube-system                 kube-proxy-hlgg9                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m45s
	  kube-system                 kube-scheduler-addons-649657              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m41s
	  local-path-storage          local-path-provisioner-8d985888d-s7lgz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-rz9f2           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     8m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m42s                kube-proxy       
	  Normal  NodeHasSufficientMemory  9m4s (x8 over 9m4s)  kubelet          Node addons-649657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m4s (x8 over 9m4s)  kubelet          Node addons-649657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m4s (x7 over 9m4s)  kubelet          Node addons-649657 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m58s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m58s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m58s                kubelet          Node addons-649657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m58s                kubelet          Node addons-649657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m58s                kubelet          Node addons-649657 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m57s                kubelet          Node addons-649657 status is now: NodeReady
	  Normal  RegisteredNode           8m46s                node-controller  Node addons-649657 event: Registered Node addons-649657 in Controller
	
	
	==> dmesg <==
	[  +5.005001] kauditd_printk_skb: 98 callbacks suppressed
	[  +5.547445] kauditd_printk_skb: 93 callbacks suppressed
	[  +5.339872] kauditd_printk_skb: 104 callbacks suppressed
	[ +15.729026] kauditd_printk_skb: 29 callbacks suppressed
	[Apr22 10:40] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.231293] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.524435] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.601251] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.051333] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.987413] kauditd_printk_skb: 41 callbacks suppressed
	[Apr22 10:41] kauditd_printk_skb: 24 callbacks suppressed
	[ +40.821261] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.649567] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.564957] kauditd_printk_skb: 11 callbacks suppressed
	[Apr22 10:42] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.851118] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.410296] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.232521] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.362981] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.343604] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.522649] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.496562] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.655237] kauditd_printk_skb: 30 callbacks suppressed
	[Apr22 10:44] kauditd_printk_skb: 10 callbacks suppressed
	[Apr22 10:45] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [3dce2c24d181494f5a032b1ca97445bc0c8ca16e280f781f2fe9667680c6ff00] <==
	{"level":"info","ts":"2024-04-22T10:40:37.901021Z","caller":"traceutil/trace.go:171","msg":"trace[669615030] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1100; }","duration":"165.079261ms","start":"2024-04-22T10:40:37.735936Z","end":"2024-04-22T10:40:37.901015Z","steps":["trace[669615030] 'agreement among raft nodes before linearized reading'  (duration: 159.377586ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:40:37.897124Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.759375ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14358"}
	{"level":"info","ts":"2024-04-22T10:40:37.901211Z","caller":"traceutil/trace.go:171","msg":"trace[216867311] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1100; }","duration":"124.870842ms","start":"2024-04-22T10:40:37.776333Z","end":"2024-04-22T10:40:37.901203Z","steps":["trace[216867311] 'agreement among raft nodes before linearized reading'  (duration: 120.706175ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T10:40:47.50139Z","caller":"traceutil/trace.go:171","msg":"trace[489243040] linearizableReadLoop","detail":"{readStateIndex:1218; appliedIndex:1217; }","duration":"271.15835ms","start":"2024-04-22T10:40:47.230219Z","end":"2024-04-22T10:40:47.501377Z","steps":["trace[489243040] 'read index received'  (duration: 271.03185ms)","trace[489243040] 'applied index is now lower than readState.Index'  (duration: 125.94µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-22T10:40:47.501684Z","caller":"traceutil/trace.go:171","msg":"trace[1704839840] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"312.928147ms","start":"2024-04-22T10:40:47.188746Z","end":"2024-04-22T10:40:47.501674Z","steps":["trace[1704839840] 'process raft request'  (duration: 312.542394ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:40:47.501873Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T10:40:47.18873Z","time spent":"313.002077ms","remote":"127.0.0.1:50692","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2186,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/snapshot-controller-745499f584\" mod_revision:1061 > success:<request_put:<key:\"/registry/replicasets/kube-system/snapshot-controller-745499f584\" value_size:2114 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/snapshot-controller-745499f584\" > >"}
	{"level":"warn","ts":"2024-04-22T10:40:47.502083Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"271.890046ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-04-22T10:40:47.502133Z","caller":"traceutil/trace.go:171","msg":"trace[100995071] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1180; }","duration":"271.960738ms","start":"2024-04-22T10:40:47.230165Z","end":"2024-04-22T10:40:47.502126Z","steps":["trace[100995071] 'agreement among raft nodes before linearized reading'  (duration: 271.843578ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:40:47.502424Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.126534ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85565"}
	{"level":"info","ts":"2024-04-22T10:40:47.502475Z","caller":"traceutil/trace.go:171","msg":"trace[1028049129] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1180; }","duration":"267.252649ms","start":"2024-04-22T10:40:47.235216Z","end":"2024-04-22T10:40:47.502468Z","steps":["trace[1028049129] 'agreement among raft nodes before linearized reading'  (duration: 267.027963ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:40:47.502716Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.181423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-22T10:40:47.502853Z","caller":"traceutil/trace.go:171","msg":"trace[344330702] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1180; }","duration":"141.339792ms","start":"2024-04-22T10:40:47.361506Z","end":"2024-04-22T10:40:47.502846Z","steps":["trace[344330702] 'agreement among raft nodes before linearized reading'  (duration: 141.196561ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:40:47.502994Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.703964ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14358"}
	{"level":"info","ts":"2024-04-22T10:40:47.503039Z","caller":"traceutil/trace.go:171","msg":"trace[1642437512] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1180; }","duration":"227.768994ms","start":"2024-04-22T10:40:47.275264Z","end":"2024-04-22T10:40:47.503033Z","steps":["trace[1642437512] 'agreement among raft nodes before linearized reading'  (duration: 227.671336ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T10:41:56.105094Z","caller":"traceutil/trace.go:171","msg":"trace[243768274] transaction","detail":"{read_only:false; response_revision:1320; number_of_response:1; }","duration":"355.024115ms","start":"2024-04-22T10:41:55.750051Z","end":"2024-04-22T10:41:56.105075Z","steps":["trace[243768274] 'process raft request'  (duration: 354.927316ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:41:56.105349Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T10:41:55.750037Z","time spent":"355.259656ms","remote":"127.0.0.1:50512","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1299 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-04-22T10:42:05.190135Z","caller":"traceutil/trace.go:171","msg":"trace[8659231] transaction","detail":"{read_only:false; response_revision:1369; number_of_response:1; }","duration":"185.39644ms","start":"2024-04-22T10:42:05.004718Z","end":"2024-04-22T10:42:05.190115Z","steps":["trace[8659231] 'process raft request'  (duration: 185.221295ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T10:42:25.670361Z","caller":"traceutil/trace.go:171","msg":"trace[1108447390] linearizableReadLoop","detail":"{readStateIndex:1628; appliedIndex:1627; }","duration":"218.332811ms","start":"2024-04-22T10:42:25.451993Z","end":"2024-04-22T10:42:25.670326Z","steps":["trace[1108447390] 'read index received'  (duration: 218.181293ms)","trace[1108447390] 'applied index is now lower than readState.Index'  (duration: 150.936µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-22T10:42:25.670919Z","caller":"traceutil/trace.go:171","msg":"trace[474003058] transaction","detail":"{read_only:false; response_revision:1562; number_of_response:1; }","duration":"341.338256ms","start":"2024-04-22T10:42:25.329561Z","end":"2024-04-22T10:42:25.670899Z","steps":["trace[474003058] 'process raft request'  (duration: 340.65397ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:42:25.671193Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T10:42:25.329544Z","time spent":"341.453615ms","remote":"127.0.0.1:50414","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1534 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-22T10:42:25.671452Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.450237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/test-local-path\" ","response":"range_response_count:1 size:3596"}
	{"level":"info","ts":"2024-04-22T10:42:25.671574Z","caller":"traceutil/trace.go:171","msg":"trace[1084977975] range","detail":"{range_begin:/registry/pods/default/test-local-path; range_end:; response_count:1; response_revision:1562; }","duration":"219.592913ms","start":"2024-04-22T10:42:25.45197Z","end":"2024-04-22T10:42:25.671563Z","steps":["trace[1084977975] 'agreement among raft nodes before linearized reading'  (duration: 219.387997ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:42:25.672257Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.820562ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6861"}
	{"level":"info","ts":"2024-04-22T10:42:25.672288Z","caller":"traceutil/trace.go:171","msg":"trace[895097194] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1562; }","duration":"131.874499ms","start":"2024-04-22T10:42:25.540404Z","end":"2024-04-22T10:42:25.672279Z","steps":["trace[895097194] 'agreement among raft nodes before linearized reading'  (duration: 131.345061ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T10:43:19.734498Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.474785ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5517348271807919333 > lease_revoke:<id:4c918f056339f4a2>","response":"size:29"}
	
	
	==> gcp-auth [98ce745d0650b4b452c0bafed8251ffb60086e77a34f7a677a87af3eb5451dd6] <==
	2024/04/22 10:41:56 GCP Auth Webhook started!
	2024/04/22 10:41:58 Ready to marshal response ...
	2024/04/22 10:41:58 Ready to write response ...
	2024/04/22 10:42:02 Ready to marshal response ...
	2024/04/22 10:42:02 Ready to write response ...
	2024/04/22 10:42:08 Ready to marshal response ...
	2024/04/22 10:42:08 Ready to write response ...
	2024/04/22 10:42:15 Ready to marshal response ...
	2024/04/22 10:42:15 Ready to write response ...
	2024/04/22 10:42:15 Ready to marshal response ...
	2024/04/22 10:42:15 Ready to write response ...
	2024/04/22 10:42:27 Ready to marshal response ...
	2024/04/22 10:42:27 Ready to write response ...
	2024/04/22 10:42:29 Ready to marshal response ...
	2024/04/22 10:42:29 Ready to write response ...
	2024/04/22 10:42:34 Ready to marshal response ...
	2024/04/22 10:42:34 Ready to write response ...
	2024/04/22 10:42:36 Ready to marshal response ...
	2024/04/22 10:42:36 Ready to write response ...
	2024/04/22 10:42:36 Ready to marshal response ...
	2024/04/22 10:42:36 Ready to write response ...
	2024/04/22 10:42:36 Ready to marshal response ...
	2024/04/22 10:42:36 Ready to write response ...
	2024/04/22 10:44:57 Ready to marshal response ...
	2024/04/22 10:44:57 Ready to write response ...
	
	
	==> kernel <==
	 10:48:03 up 9 min,  0 users,  load average: 0.17, 0.55, 0.45
	Linux addons-649657 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [49a61e188d41b17b7cae7258b2e08215974cb51d7f7cb89893a9e4eb40fc5a3d] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0422 10:41:04.595077       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.190.243:443/apis/metrics.k8s.io/v1beta1: Get "https://10.101.190.243:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.101.190.243:443: connect: connection refused
	E0422 10:41:04.600511       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.190.243:443/apis/metrics.k8s.io/v1beta1: Get "https://10.101.190.243:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.101.190.243:443: connect: connection refused
	I0422 10:41:04.681258       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0422 10:42:13.738339       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0422 10:42:18.784548       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0422 10:42:19.874743       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0422 10:42:20.046694       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"gadget\" not found]"
	I0422 10:42:34.308356       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0422 10:42:34.539119       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.158.140"}
	I0422 10:42:36.806521       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.1.128"}
	I0422 10:42:47.717715       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 10:42:47.717749       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 10:42:47.752376       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 10:42:47.752479       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 10:42:47.758532       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 10:42:47.758601       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 10:42:47.765186       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 10:42:47.765266       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 10:42:47.828660       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 10:42:47.828736       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0422 10:42:48.759247       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0422 10:42:48.829584       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0422 10:42:48.850043       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0422 10:44:58.101394       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.207.48"}
	
	
	==> kube-controller-manager [e87c8e5071b3101013333015fc0e2d11e262168ef3ae336c3da95c8911871553] <==
	W0422 10:45:59.019753       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:45:59.020020       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:46:01.504598       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:46:01.504815       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:46:27.968031       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:46:27.968303       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:46:29.078855       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:46:29.078940       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:46:46.356167       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:46:46.356275       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:46:50.088249       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:46:50.088413       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:47:03.669994       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:47:03.670192       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:47:22.162710       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:47:22.162828       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:47:37.577445       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:47:37.577539       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:47:41.032113       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:47:41.032291       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:47:45.504160       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:47:45.504189       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 10:47:58.019114       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 10:47:58.019147       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0422 10:48:01.869516       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="9.851µs"
	
	
	==> kube-proxy [cc4ae3f334eb8f15a02ce1cb74c938edda287420283c1625060ec6de34223cfc] <==
	I0422 10:39:21.152437       1 server_linux.go:69] "Using iptables proxy"
	I0422 10:39:21.171510       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.194"]
	I0422 10:39:21.280064       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 10:39:21.280132       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 10:39:21.280150       1 server_linux.go:165] "Using iptables Proxier"
	I0422 10:39:21.286056       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 10:39:21.286224       1 server.go:872] "Version info" version="v1.30.0"
	I0422 10:39:21.286263       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 10:39:21.288075       1 config.go:192] "Starting service config controller"
	I0422 10:39:21.288090       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 10:39:21.288107       1 config.go:101] "Starting endpoint slice config controller"
	I0422 10:39:21.288111       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 10:39:21.289966       1 config.go:319] "Starting node config controller"
	I0422 10:39:21.289975       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 10:39:21.388622       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 10:39:21.388667       1 shared_informer.go:320] Caches are synced for service config
	I0422 10:39:21.390001       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [228b3107896998ed67a5c41465c19156bb68c40d0b7d32997369f4ceea0e9199] <==
	E0422 10:39:02.515981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 10:39:02.516055       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 10:39:02.516738       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 10:39:02.517181       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 10:39:02.517349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 10:39:02.517475       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 10:39:02.517591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 10:39:02.517723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 10:39:03.391483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 10:39:03.391527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 10:39:03.391496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 10:39:03.391550       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0422 10:39:03.521351       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 10:39:03.521410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 10:39:03.697672       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 10:39:03.697886       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 10:39:03.698616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 10:39:03.698663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 10:39:03.837590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 10:39:03.837689       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 10:39:03.841600       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 10:39:03.841681       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0422 10:39:04.061242       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 10:39:04.061511       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0422 10:39:06.602472       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 22 10:45:07 addons-649657 kubelet[1279]: I0422 10:45:07.272704    1279 scope.go:117] "RemoveContainer" containerID="8e53e4efa53590c6fe4278ba7f05a2a48f730509a6aad04790cbcc6f87279ce5"
	Apr 22 10:45:07 addons-649657 kubelet[1279]: I0422 10:45:07.297196    1279 scope.go:117] "RemoveContainer" containerID="bfe7b37b7911c734bb2cecd23824a6d0f9e7fc0597db799d84ae3fdbfae185a6"
	Apr 22 10:45:59 addons-649657 kubelet[1279]: I0422 10:45:59.408270    1279 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7db6d8ff4d-2mxqp" secret="" err="secret \"gcp-auth\" not found"
	Apr 22 10:46:05 addons-649657 kubelet[1279]: E0422 10:46:05.452305    1279 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 10:46:05 addons-649657 kubelet[1279]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 10:46:05 addons-649657 kubelet[1279]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 10:46:05 addons-649657 kubelet[1279]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 10:46:05 addons-649657 kubelet[1279]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 10:47:05 addons-649657 kubelet[1279]: E0422 10:47:05.453445    1279 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 10:47:05 addons-649657 kubelet[1279]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 10:47:05 addons-649657 kubelet[1279]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 10:47:05 addons-649657 kubelet[1279]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 10:47:05 addons-649657 kubelet[1279]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 10:47:24 addons-649657 kubelet[1279]: I0422 10:47:24.409604    1279 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7db6d8ff4d-2mxqp" secret="" err="secret \"gcp-auth\" not found"
	Apr 22 10:48:01 addons-649657 kubelet[1279]: I0422 10:48:01.896286    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-wrvcz" podStartSLOduration=181.414506438 podStartE2EDuration="3m4.896231794s" podCreationTimestamp="2024-04-22 10:44:57 +0000 UTC" firstStartedPulling="2024-04-22 10:44:58.636443296 +0000 UTC m=+353.372699589" lastFinishedPulling="2024-04-22 10:45:02.11816865 +0000 UTC m=+356.854424945" observedRunningTime="2024-04-22 10:45:02.661438779 +0000 UTC m=+357.397695092" watchObservedRunningTime="2024-04-22 10:48:01.896231794 +0000 UTC m=+536.632488098"
	Apr 22 10:48:03 addons-649657 kubelet[1279]: I0422 10:48:03.295973    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ce74ad1e-3a35-470e-962e-901dcdc84a6d-tmp-dir\") pod \"ce74ad1e-3a35-470e-962e-901dcdc84a6d\" (UID: \"ce74ad1e-3a35-470e-962e-901dcdc84a6d\") "
	Apr 22 10:48:03 addons-649657 kubelet[1279]: I0422 10:48:03.296045    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm9tf\" (UniqueName: \"kubernetes.io/projected/ce74ad1e-3a35-470e-962e-901dcdc84a6d-kube-api-access-jm9tf\") pod \"ce74ad1e-3a35-470e-962e-901dcdc84a6d\" (UID: \"ce74ad1e-3a35-470e-962e-901dcdc84a6d\") "
	Apr 22 10:48:03 addons-649657 kubelet[1279]: I0422 10:48:03.296855    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce74ad1e-3a35-470e-962e-901dcdc84a6d-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "ce74ad1e-3a35-470e-962e-901dcdc84a6d" (UID: "ce74ad1e-3a35-470e-962e-901dcdc84a6d"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Apr 22 10:48:03 addons-649657 kubelet[1279]: I0422 10:48:03.313073    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce74ad1e-3a35-470e-962e-901dcdc84a6d-kube-api-access-jm9tf" (OuterVolumeSpecName: "kube-api-access-jm9tf") pod "ce74ad1e-3a35-470e-962e-901dcdc84a6d" (UID: "ce74ad1e-3a35-470e-962e-901dcdc84a6d"). InnerVolumeSpecName "kube-api-access-jm9tf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 22 10:48:03 addons-649657 kubelet[1279]: I0422 10:48:03.396670    1279 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ce74ad1e-3a35-470e-962e-901dcdc84a6d-tmp-dir\") on node \"addons-649657\" DevicePath \"\""
	Apr 22 10:48:03 addons-649657 kubelet[1279]: I0422 10:48:03.396696    1279 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jm9tf\" (UniqueName: \"kubernetes.io/projected/ce74ad1e-3a35-470e-962e-901dcdc84a6d-kube-api-access-jm9tf\") on node \"addons-649657\" DevicePath \"\""
	Apr 22 10:48:03 addons-649657 kubelet[1279]: I0422 10:48:03.510616    1279 scope.go:117] "RemoveContainer" containerID="747bedb41a6513d6f0ed3d498d50de4156ef98e6ab9e372c254f10629802adfe"
	Apr 22 10:48:03 addons-649657 kubelet[1279]: I0422 10:48:03.570230    1279 scope.go:117] "RemoveContainer" containerID="747bedb41a6513d6f0ed3d498d50de4156ef98e6ab9e372c254f10629802adfe"
	Apr 22 10:48:03 addons-649657 kubelet[1279]: E0422 10:48:03.570901    1279 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"747bedb41a6513d6f0ed3d498d50de4156ef98e6ab9e372c254f10629802adfe\": container with ID starting with 747bedb41a6513d6f0ed3d498d50de4156ef98e6ab9e372c254f10629802adfe not found: ID does not exist" containerID="747bedb41a6513d6f0ed3d498d50de4156ef98e6ab9e372c254f10629802adfe"
	Apr 22 10:48:03 addons-649657 kubelet[1279]: I0422 10:48:03.570944    1279 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747bedb41a6513d6f0ed3d498d50de4156ef98e6ab9e372c254f10629802adfe"} err="failed to get container status \"747bedb41a6513d6f0ed3d498d50de4156ef98e6ab9e372c254f10629802adfe\": rpc error: code = NotFound desc = could not find container \"747bedb41a6513d6f0ed3d498d50de4156ef98e6ab9e372c254f10629802adfe\": container with ID starting with 747bedb41a6513d6f0ed3d498d50de4156ef98e6ab9e372c254f10629802adfe not found: ID does not exist"
	
	
	==> storage-provisioner [dca8963cd05ebf8e5c2ab895728ad12bcce57a49e34622e946bd3d0130d46b17] <==
	I0422 10:39:26.741565       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 10:39:26.867175       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 10:39:26.884687       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0422 10:39:26.936432       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0422 10:39:26.950023       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-649657_98567884-08f1-4dd1-a87f-c9e2cb61138a!
	I0422 10:39:26.939941       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4aeef8a9-d3c9-4821-98b6-3a1ec921815c", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-649657_98567884-08f1-4dd1-a87f-c9e2cb61138a became leader
	I0422 10:39:27.151028       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-649657_98567884-08f1-4dd1-a87f-c9e2cb61138a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-649657 -n addons-649657
helpers_test.go:261: (dbg) Run:  kubectl --context addons-649657 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (367.22s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.45s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-649657
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-649657: exit status 82 (2m0.478800244s)

                                                
                                                
-- stdout --
	* Stopping node "addons-649657"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-649657" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-649657
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-649657: exit status 11 (21.684990773s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.194:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-649657" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-649657
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-649657: exit status 11 (6.144491728s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.194:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-649657" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-649657
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-649657: exit status 11 (6.143162281s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.194:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-649657" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.45s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (302.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-668059 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-668059 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-668059 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-668059 --alsologtostderr -v=1] stderr:
I0422 11:01:33.480822   25765 out.go:291] Setting OutFile to fd 1 ...
I0422 11:01:33.481649   25765 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 11:01:33.481665   25765 out.go:304] Setting ErrFile to fd 2...
I0422 11:01:33.481673   25765 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 11:01:33.482066   25765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
I0422 11:01:33.482418   25765 mustload.go:65] Loading cluster: functional-668059
I0422 11:01:33.482959   25765 config.go:182] Loaded profile config "functional-668059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 11:01:33.483610   25765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 11:01:33.483683   25765 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 11:01:33.498776   25765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43591
I0422 11:01:33.499307   25765 main.go:141] libmachine: () Calling .GetVersion
I0422 11:01:33.499948   25765 main.go:141] libmachine: Using API Version  1
I0422 11:01:33.499977   25765 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 11:01:33.500360   25765 main.go:141] libmachine: () Calling .GetMachineName
I0422 11:01:33.500545   25765 main.go:141] libmachine: (functional-668059) Calling .GetState
I0422 11:01:33.502126   25765 host.go:66] Checking if "functional-668059" exists ...
I0422 11:01:33.502513   25765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 11:01:33.502557   25765 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 11:01:33.517601   25765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34993
I0422 11:01:33.517972   25765 main.go:141] libmachine: () Calling .GetVersion
I0422 11:01:33.518430   25765 main.go:141] libmachine: Using API Version  1
I0422 11:01:33.518446   25765 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 11:01:33.518751   25765 main.go:141] libmachine: () Calling .GetMachineName
I0422 11:01:33.518934   25765 main.go:141] libmachine: (functional-668059) Calling .DriverName
I0422 11:01:33.519076   25765 api_server.go:166] Checking apiserver status ...
I0422 11:01:33.519114   25765 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0422 11:01:33.519170   25765 main.go:141] libmachine: (functional-668059) Calling .GetSSHHostname
I0422 11:01:33.522117   25765 main.go:141] libmachine: (functional-668059) DBG | domain functional-668059 has defined MAC address 52:54:00:0f:9a:cb in network mk-functional-668059
I0422 11:01:33.522563   25765 main.go:141] libmachine: (functional-668059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:cb", ip: ""} in network mk-functional-668059: {Iface:virbr1 ExpiryTime:2024-04-22 11:52:12 +0000 UTC Type:0 Mac:52:54:00:0f:9a:cb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:functional-668059 Clientid:01:52:54:00:0f:9a:cb}
I0422 11:01:33.522593   25765 main.go:141] libmachine: (functional-668059) DBG | domain functional-668059 has defined IP address 192.168.39.220 and MAC address 52:54:00:0f:9a:cb in network mk-functional-668059
I0422 11:01:33.522688   25765 main.go:141] libmachine: (functional-668059) Calling .GetSSHPort
I0422 11:01:33.522838   25765 main.go:141] libmachine: (functional-668059) Calling .GetSSHKeyPath
I0422 11:01:33.522986   25765 main.go:141] libmachine: (functional-668059) Calling .GetSSHUsername
I0422 11:01:33.523159   25765 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/functional-668059/id_rsa Username:docker}
I0422 11:01:33.707288   25765 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7364/cgroup
W0422 11:01:33.754145   25765 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7364/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I0422 11:01:33.754220   25765 ssh_runner.go:195] Run: ls
I0422 11:01:33.769116   25765 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8441/healthz ...
I0422 11:01:33.774705   25765 api_server.go:279] https://192.168.39.220:8441/healthz returned 200:
ok
W0422 11:01:33.774751   25765 out.go:239] * Enabling dashboard ...
* Enabling dashboard ...
I0422 11:01:33.774962   25765 config.go:182] Loaded profile config "functional-668059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 11:01:33.774988   25765 addons.go:69] Setting dashboard=true in profile "functional-668059"
I0422 11:01:33.775000   25765 addons.go:234] Setting addon dashboard=true in "functional-668059"
I0422 11:01:33.775033   25765 host.go:66] Checking if "functional-668059" exists ...
I0422 11:01:33.775471   25765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 11:01:33.775517   25765 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 11:01:33.792640   25765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43401
I0422 11:01:33.793053   25765 main.go:141] libmachine: () Calling .GetVersion
I0422 11:01:33.793585   25765 main.go:141] libmachine: Using API Version  1
I0422 11:01:33.793608   25765 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 11:01:33.793920   25765 main.go:141] libmachine: () Calling .GetMachineName
I0422 11:01:33.794544   25765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 11:01:33.794582   25765 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 11:01:33.810347   25765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41977
I0422 11:01:33.810760   25765 main.go:141] libmachine: () Calling .GetVersion
I0422 11:01:33.811231   25765 main.go:141] libmachine: Using API Version  1
I0422 11:01:33.811251   25765 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 11:01:33.811584   25765 main.go:141] libmachine: () Calling .GetMachineName
I0422 11:01:33.811768   25765 main.go:141] libmachine: (functional-668059) Calling .GetState
I0422 11:01:33.813322   25765 main.go:141] libmachine: (functional-668059) Calling .DriverName
I0422 11:01:33.815897   25765 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0422 11:01:33.817676   25765 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0422 11:01:33.819389   25765 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0422 11:01:33.819408   25765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0422 11:01:33.819434   25765 main.go:141] libmachine: (functional-668059) Calling .GetSSHHostname
I0422 11:01:33.823080   25765 main.go:141] libmachine: (functional-668059) DBG | domain functional-668059 has defined MAC address 52:54:00:0f:9a:cb in network mk-functional-668059
I0422 11:01:33.823557   25765 main.go:141] libmachine: (functional-668059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:cb", ip: ""} in network mk-functional-668059: {Iface:virbr1 ExpiryTime:2024-04-22 11:52:12 +0000 UTC Type:0 Mac:52:54:00:0f:9a:cb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:functional-668059 Clientid:01:52:54:00:0f:9a:cb}
I0422 11:01:33.823580   25765 main.go:141] libmachine: (functional-668059) DBG | domain functional-668059 has defined IP address 192.168.39.220 and MAC address 52:54:00:0f:9a:cb in network mk-functional-668059
I0422 11:01:33.823792   25765 main.go:141] libmachine: (functional-668059) Calling .GetSSHPort
I0422 11:01:33.824057   25765 main.go:141] libmachine: (functional-668059) Calling .GetSSHKeyPath
I0422 11:01:33.824253   25765 main.go:141] libmachine: (functional-668059) Calling .GetSSHUsername
I0422 11:01:33.824427   25765 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/functional-668059/id_rsa Username:docker}
I0422 11:01:33.975744   25765 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0422 11:01:33.975788   25765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0422 11:01:34.023601   25765 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0422 11:01:34.023633   25765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0422 11:01:34.062111   25765 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0422 11:01:34.062135   25765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0422 11:01:34.097964   25765 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0422 11:01:34.097985   25765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0422 11:01:34.202229   25765 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
I0422 11:01:34.202252   25765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0422 11:01:34.251309   25765 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0422 11:01:34.251331   25765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0422 11:01:34.290818   25765 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0422 11:01:34.290838   25765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0422 11:01:34.325764   25765 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0422 11:01:34.325802   25765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0422 11:01:34.376725   25765 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0422 11:01:34.376756   25765 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0422 11:01:34.443279   25765 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0422 11:01:35.890555   25765 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.447214967s)
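The sequence above stages each dashboard manifest under /etc/kubernetes/addons/ on the guest and then applies them all in one shot with the node's bundled kubectl. A small Go sketch that only assembles that command line, as a reading aid (minikube runs the equivalent command over the SSH session opened earlier rather than printing it):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Manifest names as they appear in the scp steps above.
	manifests := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml",
		"dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
		"dashboard-dp.yaml", "dashboard-role.yaml",
		"dashboard-rolebinding.yaml", "dashboard-sa.yaml",
		"dashboard-secret.yaml", "dashboard-svc.yaml",
	}

	args := []string{
		"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.0/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", "/etc/kubernetes/addons/"+m)
	}
	fmt.Println(strings.Join(args, " "))
}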
I0422 11:01:35.890620   25765 main.go:141] libmachine: Making call to close driver server
I0422 11:01:35.890638   25765 main.go:141] libmachine: (functional-668059) Calling .Close
I0422 11:01:35.890934   25765 main.go:141] libmachine: Successfully made call to close driver server
I0422 11:01:35.890958   25765 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 11:01:35.890968   25765 main.go:141] libmachine: Making call to close driver server
I0422 11:01:35.890980   25765 main.go:141] libmachine: (functional-668059) Calling .Close
I0422 11:01:35.891180   25765 main.go:141] libmachine: Successfully made call to close driver server
I0422 11:01:35.891207   25765 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 11:01:35.893306   25765 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-668059 addons enable metrics-server

I0422 11:01:35.894817   25765 addons.go:197] Writing out "functional-668059" config to set dashboard=true...
W0422 11:01:35.895094   25765 out.go:239] * Verifying dashboard health ...
* Verifying dashboard health ...
I0422 11:01:35.896154   25765 kapi.go:59] client config for functional-668059: &rest.Config{Host:"https://192.168.39.220:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt", KeyFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.key", CAFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0422 11:01:35.906531   25765 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  d0fe74ad-7b30-4c8c-9188-cb32d52b5d52 592 0 2024-04-22 11:01:35 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2024-04-22 11:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.100.131.44,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.100.131.44],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
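The client config dump and the Service lookup above map roughly onto the following client-go usage: certificate auth against the apiserver at https://192.168.39.220:8441 using the profile's client.crt/client.key and the cluster CA, followed by a Get of the kubernetes-dashboard Service. A sketch for illustration only, with the paths and address taken from the log above:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Cert-based rest.Config equivalent to the one logged by kapi.go above.
	cfg := &rest.Config{
		Host: "https://192.168.39.220:8441",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.key",
			CAFile:   "/home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt",
		},
	}

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same check as "Found service" above: the dashboard Service exposes
	// port 80 -> targetPort 9090 behind selector k8s-app=kubernetes-dashboard.
	svc, err := clientset.CoreV1().Services("kubernetes-dashboard").
		Get(context.TODO(), "kubernetes-dashboard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("found service:", svc.Name, "clusterIP:", svc.Spec.ClusterIP)
}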
W0422 11:01:35.906652   25765 out.go:239] * Launching proxy ...
* Launching proxy ...
I0422 11:01:35.906723   25765 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-668059 proxy --port 36195]
I0422 11:01:35.906975   25765 dashboard.go:157] Waiting for kubectl to output host:port ...
I0422 11:01:35.949678   25765 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0422 11:01:35.949713   25765 out.go:239] * Verifying proxy health ...
* Verifying proxy health ...
I0422 11:01:35.969762   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fa10b3df-f45c-4eab-b713-aeae22034780] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:35 GMT]] Body:0xc00232e040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0022cafc0 TLS:<nil>}
I0422 11:01:35.969874   25765 retry.go:31] will retry after 55.402µs: Temporary Error: unexpected response code: 503
I0422 11:01:35.974497   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d454918d-d202-428b-9c78-f93ae1f1f1d8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:35 GMT]] Body:0xc002392680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00233c000 TLS:<nil>}
I0422 11:01:35.974550   25765 retry.go:31] will retry after 133.745µs: Temporary Error: unexpected response code: 503
I0422 11:01:35.977900   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[88a52b4b-1143-4f75-9220-8437e01f0eb2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:35 GMT]] Body:0xc0022e0840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023bd320 TLS:<nil>}
I0422 11:01:35.977963   25765 retry.go:31] will retry after 192.482µs: Temporary Error: unexpected response code: 503
I0422 11:01:35.981614   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fc4aa636-3142-4a81-9f0d-42ee60c850b5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:35 GMT]] Body:0xc0023927c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0022cb320 TLS:<nil>}
I0422 11:01:35.981653   25765 retry.go:31] will retry after 257.482µs: Temporary Error: unexpected response code: 503
I0422 11:01:35.986514   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ac609880-7681-477e-8d87-f8abf44239a9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:35 GMT]] Body:0xc00232e140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023bd560 TLS:<nil>}
I0422 11:01:35.986554   25765 retry.go:31] will retry after 696.346µs: Temporary Error: unexpected response code: 503
I0422 11:01:35.990001   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6ad2dde0-26c8-4b49-bfd7-85e0ccafa058] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:35 GMT]] Body:0xc002392900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00233c240 TLS:<nil>}
I0422 11:01:35.990054   25765 retry.go:31] will retry after 777.141µs: Temporary Error: unexpected response code: 503
I0422 11:01:35.993876   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[40b65121-9a3d-4b1b-986e-596344e062bf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:35 GMT]] Body:0xc00232e240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023bd7a0 TLS:<nil>}
I0422 11:01:35.993926   25765 retry.go:31] will retry after 626.098µs: Temporary Error: unexpected response code: 503
I0422 11:01:35.998351   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8a2352ed-79fb-4533-8c19-7b11e18957c6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:35 GMT]] Body:0xc0022e09c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00233c480 TLS:<nil>}
I0422 11:01:35.998400   25765 retry.go:31] will retry after 1.035794ms: Temporary Error: unexpected response code: 503
I0422 11:01:36.002489   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b44db71b-8113-4194-8d6e-6e912a3a731c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:35 GMT]] Body:0xc0022e0b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0022cb560 TLS:<nil>}
I0422 11:01:36.002536   25765 retry.go:31] will retry after 3.02549ms: Temporary Error: unexpected response code: 503
I0422 11:01:36.008031   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9c99d471-2e1c-4c61-b1fe-dcb8453782a7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:35 GMT]] Body:0xc002392a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0022cb8c0 TLS:<nil>}
I0422 11:01:36.008081   25765 retry.go:31] will retry after 4.590671ms: Temporary Error: unexpected response code: 503
I0422 11:01:36.015714   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[10b90f66-df51-48e4-9f35-9e2d3fc2b03d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:35 GMT]] Body:0xc0022e0c40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023bd9e0 TLS:<nil>}
I0422 11:01:36.015768   25765 retry.go:31] will retry after 6.764952ms: Temporary Error: unexpected response code: 503
I0422 11:01:36.025759   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0488237d-153b-4874-8859-d1143b9d060b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:36 GMT]] Body:0xc002392bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0022cbb00 TLS:<nil>}
I0422 11:01:36.025806   25765 retry.go:31] will retry after 11.407755ms: Temporary Error: unexpected response code: 503
I0422 11:01:36.042267   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1d190133-ea29-40be-8abc-8630dabcfd16] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:36 GMT]] Body:0xc002392cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023bdc20 TLS:<nil>}
I0422 11:01:36.042327   25765 retry.go:31] will retry after 18.261181ms: Temporary Error: unexpected response code: 503
I0422 11:01:36.065203   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[419eac8c-ab4a-438c-9fb9-9da5516848da] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:36 GMT]] Body:0xc0022e0d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023bde60 TLS:<nil>}
I0422 11:01:36.065272   25765 retry.go:31] will retry after 23.143119ms: Temporary Error: unexpected response code: 503
I0422 11:01:36.092701   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1adb497e-5809-4b93-ac8b-ad75058b4107] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:36 GMT]] Body:0xc00232e3c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0022cbd40 TLS:<nil>}
I0422 11:01:36.092759   25765 retry.go:31] will retry after 15.330448ms: Temporary Error: unexpected response code: 503
I0422 11:01:36.112122   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5c406724-4f9e-463a-b76c-1719de064640] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:36 GMT]] Body:0xc0022e0e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00233c6c0 TLS:<nil>}
I0422 11:01:36.112177   25765 retry.go:31] will retry after 46.438773ms: Temporary Error: unexpected response code: 503
I0422 11:01:36.165005   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ce47e18c-dac2-4dcf-b4f4-39ca9a465129] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:36 GMT]] Body:0xc002392e40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002488000 TLS:<nil>}
I0422 11:01:36.165077   25765 retry.go:31] will retry after 89.315995ms: Temporary Error: unexpected response code: 503
I0422 11:01:36.258702   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1ea851d1-1336-4628-8d40-1bec76dc6a5c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:36 GMT]] Body:0xc0022e0fc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002440120 TLS:<nil>}
I0422 11:01:36.258793   25765 retry.go:31] will retry after 92.868204ms: Temporary Error: unexpected response code: 503
I0422 11:01:36.356562   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[307d126d-da00-4656-a203-10198907120c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:36 GMT]] Body:0xc00232e500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002488240 TLS:<nil>}
I0422 11:01:36.356632   25765 retry.go:31] will retry after 216.285876ms: Temporary Error: unexpected response code: 503
I0422 11:01:36.576713   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[de2cfb96-f5c5-460a-b518-2379fda95eb3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:36 GMT]] Body:0xc002392f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00233c900 TLS:<nil>}
I0422 11:01:36.576805   25765 retry.go:31] will retry after 216.097952ms: Temporary Error: unexpected response code: 503
I0422 11:01:36.797594   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7ccf39b9-8e67-4a4b-a306-12e64b50b631] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:36 GMT]] Body:0xc00232e640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002440360 TLS:<nil>}
I0422 11:01:36.797668   25765 retry.go:31] will retry after 357.921551ms: Temporary Error: unexpected response code: 503
I0422 11:01:37.162271   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4505f718-9c5c-4d2e-b82d-89b905212933] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:37 GMT]] Body:0xc0020248c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00233cb40 TLS:<nil>}
I0422 11:01:37.162352   25765 retry.go:31] will retry after 253.767147ms: Temporary Error: unexpected response code: 503
I0422 11:01:37.419897   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9b1fd155-a40b-4121-8bf2-932bc23a0dfe] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:37 GMT]] Body:0xc00232e700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029a5a0 TLS:<nil>}
I0422 11:01:37.419957   25765 retry.go:31] will retry after 460.846067ms: Temporary Error: unexpected response code: 503
I0422 11:01:37.884763   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[19015d0f-2167-421d-a388-1998f4129721] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:37 GMT]] Body:0xc002024a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00233cd80 TLS:<nil>}
I0422 11:01:37.884856   25765 retry.go:31] will retry after 1.406765782s: Temporary Error: unexpected response code: 503
I0422 11:01:39.295563   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[729e39e4-22be-4467-8dc5-ceea280b843e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:39 GMT]] Body:0xc0022e1240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002488480 TLS:<nil>}
I0422 11:01:39.295645   25765 retry.go:31] will retry after 1.103036199s: Temporary Error: unexpected response code: 503
I0422 11:01:40.541434   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[df1a8122-9af1-4fe4-8bd0-448e9c1dc93f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:40 GMT]] Body:0xc00232e840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0024886c0 TLS:<nil>}
I0422 11:01:40.541516   25765 retry.go:31] will retry after 3.639084782s: Temporary Error: unexpected response code: 503
I0422 11:01:44.184004   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[61f741cd-6025-453a-806f-788e461a935d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:44 GMT]] Body:0xc0022e1300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029a900 TLS:<nil>}
I0422 11:01:44.184074   25765 retry.go:31] will retry after 2.110690853s: Temporary Error: unexpected response code: 503
I0422 11:01:46.856408   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ce87bd48-ee76-4deb-ad96-912d8b0520c4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:46 GMT]] Body:0xc0022e1380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029ab40 TLS:<nil>}
I0422 11:01:46.856486   25765 retry.go:31] will retry after 6.80794683s: Temporary Error: unexpected response code: 503
I0422 11:01:53.822733   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[562896a8-d2c7-4c17-985d-eae19322835a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:01:53 GMT]] Body:0xc0022e1480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0024887e0 TLS:<nil>}
I0422 11:01:53.822809   25765 retry.go:31] will retry after 7.441204396s: Temporary Error: unexpected response code: 503
I0422 11:02:01.269004   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3059ef0e-afb4-40c7-87f4-2ce7f31ab4d1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:02:01 GMT]] Body:0xc0022e1500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00233cfc0 TLS:<nil>}
I0422 11:02:01.269077   25765 retry.go:31] will retry after 7.9734629s: Temporary Error: unexpected response code: 503
I0422 11:02:09.246096   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[77beb18a-6ae5-4b8b-a15e-aac4ac58f7c4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:02:09 GMT]] Body:0xc00232e9c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002488a20 TLS:<nil>}
I0422 11:02:09.246166   25765 retry.go:31] will retry after 16.564358245s: Temporary Error: unexpected response code: 503
I0422 11:02:25.814797   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6990c14a-fa5d-4357-94ce-ba42f68c142f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:02:25 GMT]] Body:0xc0022e1640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00233d200 TLS:<nil>}
I0422 11:02:25.814859   25765 retry.go:31] will retry after 23.623097192s: Temporary Error: unexpected response code: 503
I0422 11:02:49.442851   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[72f87f15-04ec-4042-b453-3c7845485e7b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:02:49 GMT]] Body:0xc0022e16c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0024405a0 TLS:<nil>}
I0422 11:02:49.442916   25765 retry.go:31] will retry after 21.694152263s: Temporary Error: unexpected response code: 503
I0422 11:03:11.142620   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[16667e12-a1d1-42ab-a01c-4e2207bb3d8b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:03:11 GMT]] Body:0xc002393140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002488c60 TLS:<nil>}
I0422 11:03:11.142684   25765 retry.go:31] will retry after 31.851820608s: Temporary Error: unexpected response code: 503
I0422 11:03:43.001823   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f434c9e2-67ca-4232-9d8f-cc32428a7ac2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:03:42 GMT]] Body:0xc000cfe0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0024407e0 TLS:<nil>}
I0422 11:03:43.001895   25765 retry.go:31] will retry after 1m19.565780937s: Temporary Error: unexpected response code: 503
I0422 11:05:02.572547   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[53639727-19a0-472c-b080-f3178aa41ca7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:05:02 GMT]] Body:0xc002392100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029a120 TLS:<nil>}
I0422 11:05:02.572614   25765 retry.go:31] will retry after 37.629049686s: Temporary Error: unexpected response code: 503
I0422 11:05:40.206127   25765 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4dfc723d-84cb-4cd1-8b1f-9bba963e9df0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 11:05:40 GMT]] Body:0xc002392040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029ad80 TLS:<nil>}
I0422 11:05:40.206205   25765 retry.go:31] will retry after 1m14.828486416s: Temporary Error: unexpected response code: 503
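The long run of 503s above is the proxy health check: the dashboard is probed through the local kubectl proxy at /api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/, and the apiserver answers 503 for as long as the Service has no ready endpoints, so the verifier keeps retrying with a growing delay until the test gives up. A minimal sketch of such a poll loop (the delays above come from minikube's retry helper; the backoff here is illustrative):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForDashboard polls the dashboard through a kubectl proxy listening on
// proxyHostPort until it answers 200 or the timeout expires.
func waitForDashboard(proxyHostPort string, timeout time.Duration) error {
	url := fmt.Sprintf(
		"http://%s/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/",
		proxyHostPort)

	deadline := time.Now().Add(timeout)
	backoff := 50 * time.Millisecond
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the Service finally has a ready endpoint
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("dashboard not healthy within %s", timeout)
		}
		time.Sleep(backoff)
		if backoff < 30*time.Second {
			backoff *= 2 // grow the delay between probes, as in the log above
		}
	}
}

func main() {
	if err := waitForDashboard("127.0.0.1:36195", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}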
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-668059 -n functional-668059
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-668059 logs -n 25: (1.383221718s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	|----------------|---------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-668059 ssh findmnt                                             | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:01 UTC | 22 Apr 24 11:01 UTC |
	|                | -T /mount3                                                                |                   |         |         |                     |                     |
	| mount          | -p functional-668059                                                      | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:01 UTC |                     |
	|                | --kill=true                                                               |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                        | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:01 UTC |                     |
	|                | -p functional-668059                                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                   |         |         |                     |                     |
	| image          | functional-668059 image load --daemon                                     | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:01 UTC | 22 Apr 24 11:01 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-668059                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-668059 image ls                                                | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:01 UTC | 22 Apr 24 11:01 UTC |
	| image          | functional-668059 image load --daemon                                     | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:01 UTC | 22 Apr 24 11:01 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-668059                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-668059 image ls                                                | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:01 UTC | 22 Apr 24 11:01 UTC |
	| image          | functional-668059 image load --daemon                                     | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:01 UTC | 22 Apr 24 11:01 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-668059                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-668059 image ls                                                | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:01 UTC | 22 Apr 24 11:01 UTC |
	| image          | functional-668059 image save                                              | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:01 UTC | 22 Apr 24 11:01 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-668059                  |                   |         |         |                     |                     |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-668059 image rm                                                | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:01 UTC | 22 Apr 24 11:01 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-668059                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-668059 image ls                                                | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:01 UTC | 22 Apr 24 11:01 UTC |
	| image          | functional-668059 image load                                              | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:01 UTC | 22 Apr 24 11:02 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| update-context | functional-668059                                                         | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:02 UTC | 22 Apr 24 11:02 UTC |
	|                | update-context                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                   |         |         |                     |                     |
	| update-context | functional-668059                                                         | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:02 UTC | 22 Apr 24 11:02 UTC |
	|                | update-context                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                   |         |         |                     |                     |
	| image          | functional-668059 image ls                                                | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:02 UTC | 22 Apr 24 11:02 UTC |
	| update-context | functional-668059                                                         | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:02 UTC | 22 Apr 24 11:02 UTC |
	|                | update-context                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                   |         |         |                     |                     |
	| image          | functional-668059 image save --daemon                                     | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:02 UTC | 22 Apr 24 11:02 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-668059                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-668059                                                         | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:02 UTC | 22 Apr 24 11:02 UTC |
	|                | image ls --format json                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-668059                                                         | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:02 UTC |                     |
	|                | image ls --format yaml                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-668059                                                         | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:02 UTC | 22 Apr 24 11:02 UTC |
	|                | image ls --format short                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-668059                                                         | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:02 UTC | 22 Apr 24 11:02 UTC |
	|                | image ls --format table                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| ssh            | functional-668059 ssh pgrep                                               | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:02 UTC |                     |
	|                | buildkitd                                                                 |                   |         |         |                     |                     |
	| image          | functional-668059 image build -t                                          | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:02 UTC | 22 Apr 24 11:02 UTC |
	|                | localhost/my-image:functional-668059                                      |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                          |                   |         |         |                     |                     |
	| image          | functional-668059 image ls                                                | functional-668059 | jenkins | v1.33.0 | 22 Apr 24 11:02 UTC | 22 Apr 24 11:02 UTC |
	|----------------|---------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 11:01:30
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 11:01:30.214826   25085 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:01:30.215038   25085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:01:30.215073   25085 out.go:304] Setting ErrFile to fd 2...
	I0422 11:01:30.215089   25085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:01:30.215679   25085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:01:30.216786   25085 out.go:298] Setting JSON to false
	I0422 11:01:30.217967   25085 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2633,"bootTime":1713781057,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 11:01:30.218050   25085 start.go:139] virtualization: kvm guest
	I0422 11:01:30.224814   25085 out.go:177] * [functional-668059] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 11:01:30.226517   25085 notify.go:220] Checking for updates...
	I0422 11:01:30.226525   25085 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 11:01:30.228100   25085 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 11:01:30.229632   25085 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 11:01:30.231087   25085 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:01:30.232573   25085 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 11:01:30.234040   25085 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 11:01:30.236171   25085 config.go:182] Loaded profile config "functional-668059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:01:30.236764   25085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:01:30.236862   25085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:01:30.256167   25085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35671
	I0422 11:01:30.256596   25085 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:01:30.257335   25085 main.go:141] libmachine: Using API Version  1
	I0422 11:01:30.257357   25085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:01:30.257757   25085 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:01:30.257967   25085 main.go:141] libmachine: (functional-668059) Calling .DriverName
	I0422 11:01:30.258232   25085 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 11:01:30.258657   25085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:01:30.258708   25085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:01:30.273793   25085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37273
	I0422 11:01:30.274346   25085 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:01:30.275014   25085 main.go:141] libmachine: Using API Version  1
	I0422 11:01:30.275039   25085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:01:30.275404   25085 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:01:30.275627   25085 main.go:141] libmachine: (functional-668059) Calling .DriverName
	I0422 11:01:30.310511   25085 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 11:01:30.312187   25085 start.go:297] selected driver: kvm2
	I0422 11:01:30.312200   25085 start.go:901] validating driver "kvm2" against &{Name:functional-668059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterNa
me:functional-668059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:01:30.312304   25085 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 11:01:30.314662   25085 out.go:177] 
	W0422 11:01:30.316351   25085 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0422 11:01:30.317781   25085 out.go:177] 
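This "Last Start" log ends with the run aborting before anything is started: a requested memory allocation of 250 MiB is rejected because it is below the 1800 MB minimum, so the command exits with RSRC_INSUFFICIENT_REQ_MEMORY. A tiny sketch of that kind of validation (the 1800 MB floor is taken from the message above; the helper itself is only an illustration):

package main

import "fmt"

// minUsableMB mirrors the usable minimum quoted in the exit message above.
const minUsableMB = 1800

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf(
			"RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250)) // the 250 MiB request from the log above
}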
	
	
	==> CRI-O <==
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.307023623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b34d8185-f1c5-42b3-bab6-70643a72bce8 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.308553631Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6fc26667-c56d-41d5-9212-9342f6791522 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.309467365Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713783994309439078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:268608,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6fc26667-c56d-41d5-9212-9342f6791522 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.310045185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a48f7e5a-eabf-4a94-972c-cf97fb523df5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.310140450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a48f7e5a-eabf-4a94-972c-cf97fb523df5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.310609900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95aecaf18b3888858ca58617956536eeda9d6ca44eccab15c607cc0268672597,PodSandboxId:3cfeff0fffca780c2b211977e3d63a452c71f935062b13124325dc9bcfc27ad4,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580,State:CONTAINER_RUNNING,CreatedAt:1713783715062663231,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b28a4a6f-1613-4ede-9ef7-9ce0d306ae0b,},Annotations:map[string]string{io.kubernetes.container.hash: 6e5a261e,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a592bd33227f08310778e93b954e5e71c99e0485203c3b13fbc03aa7f5676df5,PodSandboxId:1c760c0860f20e696acaaeda261f7a3356ceec47b8215cc7e001a79527c55698,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1713783713983809778,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-ddg72,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 04d08cf7-e475-4cf9-82c3-48152255f019,},Annotations:map[string]string{io.kubernetes.container
.hash: 94ede15,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ea71d86fc69c618608b9b9e1a75416bfaa65b9232953b6dff9c9cce22e89c5,PodSandboxId:dd7b61237f6ca52764b3aa5dd6656689979d564435fc2cf6aa035a914cb69ea1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1713783710335012369,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-78smp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a14d820-ef5d-443b-9c56-
4bd9150647c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d4d4b81,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b10d9cd58e83965c7b248388acf0b293bcafe49f19495b8847a695ab2259ea,PodSandboxId:21ed82f51c9a57ea19146ce50b72a30f81b07c399af3e1ba98fa1467622fb99f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1713783685347002244,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: bu
sybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e74d2ed-5b70-476e-82ca-8cfc199a0fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 7a9fdc06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fe852ccb6f24f6cec31309e97fed419d6975c8ea2621232ecc20c6651533dc2,PodSandboxId:948eecd026da18cfd1e748483ab6d66ead896c9f3679ac22f20b591a5df64dbb,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1713783682426754811,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-no
de-connect-57b4589c47-fwc4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e20b825-ece8-407b-8cf5-d0ee1251f79d,},Annotations:map[string]string{io.kubernetes.container.hash: 978975ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8279c1c6c383b97f373e8cce465ec223d7dce8c1827e557042e3bc02dfeca0d7,PodSandboxId:979d0b4f9b989fee00579747ad46e520fd19d607dbed11d343abaec53ad58bb3,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1713783682329478925,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.po
d.name: hello-node-6d85cfcfd8-2fmdd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e141bfa-a355-4da3-aaa2-59333c8f91ed,},Annotations:map[string]string{io.kubernetes.container.hash: caac7018,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd9027e93b686b13cbaf7f1e7b4e0bb5f72a09830debcea872f264b19a08d9bc,PodSandboxId:0d56e7329b0a664d358052a5bf4423364a4a8ab43c570a1340d2275acafc5a02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713783667681682059,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcaa2046-0f49-4a0b-a444-81c8e4daf200,},Annotations:map[string]string{io.kubernetes.container.hash: bdeb952f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d5f416ed1beeadddf5bef2098654435bfa5ae91c3e7da6b8c454e603ab6a3e,PodSandboxId:eea098d4498332a7581f274f401d67972c86253f4e61e6ef36169dc27b3f2a95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713783667234594140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wnvsz,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: f776bc8c-09b4-4ce2-a5f5-cff1d2aeb0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 84ff5ded,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf7c1c7fc529024308fd4d5a0ce7ffdac74f093bea377109f6f595177e84166f,PodSandboxId:8fcf586c47e1bbf5ed4d6498e34f4434e8c8a1f852a8fc30f0971262cbb856ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713783667152176522,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fs679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3208911-4a95-4246-9433-63c5666433b9,},Annotations:map[string]string{io.kubernetes.container.hash: f71d4e4e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bc9d56e78f5bc4a7df46ac6bf7f07304cd09b461635bdb412bfff32b7fd8cd,PodSandboxId:00bd237d2b2e72c32fbee3536ea6d8a786b59d926bf91936be824a91ed334a52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713783666458206014,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66qb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5779638b-ad27-4440-8b85-2d9496164b2c,},Annotations:map[string]string{io.kubernetes.container.hash: f3998d25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63cb00927ff1ff4352da0ec907eaec9c696ce08173dcd5da2df5f99730c96f8,PodSandboxId:298c881132b7eed127503a0316524a97e613c2709ae2bed14f14fea3f1fe827d,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487
248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713783646959804031,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-668059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a0044b40568eb66830be38c03827306,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7e86d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4127049f9dda8a1735578b36b7495a09dec2eecc954afa8200b90d0e89d4ac07,PodSandboxId:176c575c0333c957c13032e9ade8cc2b3a1975661814ca19f25193b73ec67b1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713783646892000550,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-668059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d054316854da035cae7d8a9a9aa36d69,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c95cb64f2922aa63a9a2179923dc524488581a2f37550ece4792c1a2f5babd,PodSandboxId:6b7bf00e464536884dc526025e67c095f6132811e4e501e37efbe15912c4561f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713783646859179735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-668059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bb7470b499382523cf51c9065d1a75c,},Annotations:map[string]string{io.kubernetes.container.hash: a663e06c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a12c0714c8a6b8530ddc53ea5aefe8393eb666d15b87b3aee9206aa0f4867373,PodSandboxId:ca41a3c9c188d1aab805031dbf673e543402f1853e6d29715aeb606ed7bb92c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713783646781731239,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-668059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c12ce4a4aa6b43f2cabea68b384499,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a48f7e5a-eabf-4a94-972c-cf97fb523df5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.356092996Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b78bfd33-966e-4e7b-830d-814c5e535333 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.356198763Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b78bfd33-966e-4e7b-830d-814c5e535333 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.358016027Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c2d67b2-876d-4b9e-8816-c50d40279c07 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.359235370Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713783994359211988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:268608,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c2d67b2-876d-4b9e-8816-c50d40279c07 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.360127874Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d019a7dc-8322-4d32-9929-959a48398de5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.360228884Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d019a7dc-8322-4d32-9929-959a48398de5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.360601849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95aecaf18b3888858ca58617956536eeda9d6ca44eccab15c607cc0268672597,PodSandboxId:3cfeff0fffca780c2b211977e3d63a452c71f935062b13124325dc9bcfc27ad4,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580,State:CONTAINER_RUNNING,CreatedAt:1713783715062663231,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b28a4a6f-1613-4ede-9ef7-9ce0d306ae0b,},Annotations:map[string]string{io.kubernetes.container.hash: 6e5a261e,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a592bd33227f08310778e93b954e5e71c99e0485203c3b13fbc03aa7f5676df5,PodSandboxId:1c760c0860f20e696acaaeda261f7a3356ceec47b8215cc7e001a79527c55698,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1713783713983809778,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-ddg72,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 04d08cf7-e475-4cf9-82c3-48152255f019,},Annotations:map[string]string{io.kubernetes.container
.hash: 94ede15,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ea71d86fc69c618608b9b9e1a75416bfaa65b9232953b6dff9c9cce22e89c5,PodSandboxId:dd7b61237f6ca52764b3aa5dd6656689979d564435fc2cf6aa035a914cb69ea1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1713783710335012369,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-78smp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a14d820-ef5d-443b-9c56-
4bd9150647c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d4d4b81,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b10d9cd58e83965c7b248388acf0b293bcafe49f19495b8847a695ab2259ea,PodSandboxId:21ed82f51c9a57ea19146ce50b72a30f81b07c399af3e1ba98fa1467622fb99f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1713783685347002244,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: bu
sybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e74d2ed-5b70-476e-82ca-8cfc199a0fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 7a9fdc06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fe852ccb6f24f6cec31309e97fed419d6975c8ea2621232ecc20c6651533dc2,PodSandboxId:948eecd026da18cfd1e748483ab6d66ead896c9f3679ac22f20b591a5df64dbb,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1713783682426754811,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-no
de-connect-57b4589c47-fwc4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e20b825-ece8-407b-8cf5-d0ee1251f79d,},Annotations:map[string]string{io.kubernetes.container.hash: 978975ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8279c1c6c383b97f373e8cce465ec223d7dce8c1827e557042e3bc02dfeca0d7,PodSandboxId:979d0b4f9b989fee00579747ad46e520fd19d607dbed11d343abaec53ad58bb3,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1713783682329478925,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.po
d.name: hello-node-6d85cfcfd8-2fmdd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e141bfa-a355-4da3-aaa2-59333c8f91ed,},Annotations:map[string]string{io.kubernetes.container.hash: caac7018,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd9027e93b686b13cbaf7f1e7b4e0bb5f72a09830debcea872f264b19a08d9bc,PodSandboxId:0d56e7329b0a664d358052a5bf4423364a4a8ab43c570a1340d2275acafc5a02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713783667681682059,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcaa2046-0f49-4a0b-a444-81c8e4daf200,},Annotations:map[string]string{io.kubernetes.container.hash: bdeb952f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d5f416ed1beeadddf5bef2098654435bfa5ae91c3e7da6b8c454e603ab6a3e,PodSandboxId:eea098d4498332a7581f274f401d67972c86253f4e61e6ef36169dc27b3f2a95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713783667234594140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wnvsz,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: f776bc8c-09b4-4ce2-a5f5-cff1d2aeb0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 84ff5ded,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf7c1c7fc529024308fd4d5a0ce7ffdac74f093bea377109f6f595177e84166f,PodSandboxId:8fcf586c47e1bbf5ed4d6498e34f4434e8c8a1f852a8fc30f0971262cbb856ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713783667152176522,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fs679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3208911-4a95-4246-9433-63c5666433b9,},Annotations:map[string]string{io.kubernetes.container.hash: f71d4e4e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bc9d56e78f5bc4a7df46ac6bf7f07304cd09b461635bdb412bfff32b7fd8cd,PodSandboxId:00bd237d2b2e72c32fbee3536ea6d8a786b59d926bf91936be824a91ed334a52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713783666458206014,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66qb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5779638b-ad27-4440-8b85-2d9496164b2c,},Annotations:map[string]string{io.kubernetes.container.hash: f3998d25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63cb00927ff1ff4352da0ec907eaec9c696ce08173dcd5da2df5f99730c96f8,PodSandboxId:298c881132b7eed127503a0316524a97e613c2709ae2bed14f14fea3f1fe827d,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487
248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713783646959804031,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-668059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a0044b40568eb66830be38c03827306,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7e86d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4127049f9dda8a1735578b36b7495a09dec2eecc954afa8200b90d0e89d4ac07,PodSandboxId:176c575c0333c957c13032e9ade8cc2b3a1975661814ca19f25193b73ec67b1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713783646892000550,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-668059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d054316854da035cae7d8a9a9aa36d69,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c95cb64f2922aa63a9a2179923dc524488581a2f37550ece4792c1a2f5babd,PodSandboxId:6b7bf00e464536884dc526025e67c095f6132811e4e501e37efbe15912c4561f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713783646859179735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-668059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bb7470b499382523cf51c9065d1a75c,},Annotations:map[string]string{io.kubernetes.container.hash: a663e06c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a12c0714c8a6b8530ddc53ea5aefe8393eb666d15b87b3aee9206aa0f4867373,PodSandboxId:ca41a3c9c188d1aab805031dbf673e543402f1853e6d29715aeb606ed7bb92c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713783646781731239,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-668059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c12ce4a4aa6b43f2cabea68b384499,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d019a7dc-8322-4d32-9929-959a48398de5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.401961742Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a34b279a-6e98-42ac-8e83-9c2eb413addb name=/runtime.v1.RuntimeService/Version
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.402061091Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a34b279a-6e98-42ac-8e83-9c2eb413addb name=/runtime.v1.RuntimeService/Version
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.403224177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0011862e-b068-49a0-a915-7dfd7b23728e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.404178303Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713783994404145733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:268608,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0011862e-b068-49a0-a915-7dfd7b23728e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.404687171Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ae918c6-d2eb-4d16-9c9b-80aa665f5086 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.404774834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ae918c6-d2eb-4d16-9c9b-80aa665f5086 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.405108338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95aecaf18b3888858ca58617956536eeda9d6ca44eccab15c607cc0268672597,PodSandboxId:3cfeff0fffca780c2b211977e3d63a452c71f935062b13124325dc9bcfc27ad4,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580,State:CONTAINER_RUNNING,CreatedAt:1713783715062663231,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b28a4a6f-1613-4ede-9ef7-9ce0d306ae0b,},Annotations:map[string]string{io.kubernetes.container.hash: 6e5a261e,io.kubernetes.container.restartCount: 0,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a592bd33227f08310778e93b954e5e71c99e0485203c3b13fbc03aa7f5676df5,PodSandboxId:1c760c0860f20e696acaaeda261f7a3356ceec47b8215cc7e001a79527c55698,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1713783713983809778,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-ddg72,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 04d08cf7-e475-4cf9-82c3-48152255f019,},Annotations:map[string]string{io.kubernetes.container
.hash: 94ede15,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ea71d86fc69c618608b9b9e1a75416bfaa65b9232953b6dff9c9cce22e89c5,PodSandboxId:dd7b61237f6ca52764b3aa5dd6656689979d564435fc2cf6aa035a914cb69ea1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1713783710335012369,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-78smp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a14d820-ef5d-443b-9c56-
4bd9150647c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d4d4b81,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b10d9cd58e83965c7b248388acf0b293bcafe49f19495b8847a695ab2259ea,PodSandboxId:21ed82f51c9a57ea19146ce50b72a30f81b07c399af3e1ba98fa1467622fb99f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1713783685347002244,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: bu
sybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7e74d2ed-5b70-476e-82ca-8cfc199a0fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 7a9fdc06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fe852ccb6f24f6cec31309e97fed419d6975c8ea2621232ecc20c6651533dc2,PodSandboxId:948eecd026da18cfd1e748483ab6d66ead896c9f3679ac22f20b591a5df64dbb,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1713783682426754811,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-no
de-connect-57b4589c47-fwc4t,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e20b825-ece8-407b-8cf5-d0ee1251f79d,},Annotations:map[string]string{io.kubernetes.container.hash: 978975ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8279c1c6c383b97f373e8cce465ec223d7dce8c1827e557042e3bc02dfeca0d7,PodSandboxId:979d0b4f9b989fee00579747ad46e520fd19d607dbed11d343abaec53ad58bb3,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1713783682329478925,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.po
d.name: hello-node-6d85cfcfd8-2fmdd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e141bfa-a355-4da3-aaa2-59333c8f91ed,},Annotations:map[string]string{io.kubernetes.container.hash: caac7018,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd9027e93b686b13cbaf7f1e7b4e0bb5f72a09830debcea872f264b19a08d9bc,PodSandboxId:0d56e7329b0a664d358052a5bf4423364a4a8ab43c570a1340d2275acafc5a02,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713783667681682059,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name:
storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcaa2046-0f49-4a0b-a444-81c8e4daf200,},Annotations:map[string]string{io.kubernetes.container.hash: bdeb952f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72d5f416ed1beeadddf5bef2098654435bfa5ae91c3e7da6b8c454e603ab6a3e,PodSandboxId:eea098d4498332a7581f274f401d67972c86253f4e61e6ef36169dc27b3f2a95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713783667234594140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wnvsz,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: f776bc8c-09b4-4ce2-a5f5-cff1d2aeb0ad,},Annotations:map[string]string{io.kubernetes.container.hash: 84ff5ded,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf7c1c7fc529024308fd4d5a0ce7ffdac74f093bea377109f6f595177e84166f,PodSandboxId:8fcf586c47e1bbf5ed4d6498e34f4434e8c8a1f852a8fc30f0971262cbb856ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713783667152176522,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fs679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3208911-4a95-4246-9433-63c5666433b9,},Annotations:map[string]string{io.kubernetes.container.hash: f71d4e4e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87bc9d56e78f5bc4a7df46ac6bf7f07304cd09b461635bdb412bfff32b7fd8cd,PodSandboxId:00bd237d2b2e72c32fbee3536ea6d8a786b59d926bf91936be824a91ed334a52,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec
{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713783666458206014,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66qb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5779638b-ad27-4440-8b85-2d9496164b2c,},Annotations:map[string]string{io.kubernetes.container.hash: f3998d25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63cb00927ff1ff4352da0ec907eaec9c696ce08173dcd5da2df5f99730c96f8,PodSandboxId:298c881132b7eed127503a0316524a97e613c2709ae2bed14f14fea3f1fe827d,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487
248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713783646959804031,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-668059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a0044b40568eb66830be38c03827306,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7e86d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4127049f9dda8a1735578b36b7495a09dec2eecc954afa8200b90d0e89d4ac07,PodSandboxId:176c575c0333c957c13032e9ade8cc2b3a1975661814ca19f25193b73ec67b1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713783646892000550,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-668059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d054316854da035cae7d8a9a9aa36d69,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c95cb64f2922aa63a9a2179923dc524488581a2f37550ece4792c1a2f5babd,PodSandboxId:6b7bf00e464536884dc526025e67c095f6132811e4e501e37efbe15912c4561f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713783646859179735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-668059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bb7470b499382523cf51c9065d1a75c,},Annotations:map[string]string{io.kubernetes.container.hash: a663e06c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a12c0714c8a6b8530ddc53ea5aefe8393eb666d15b87b3aee9206aa0f4867373,PodSandboxId:ca41a3c9c188d1aab805031dbf673e543402f1853e6d29715aeb606ed7bb92c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713783646781731239,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-668059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c12ce4a4aa6b43f2cabea68b384499,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ae918c6-d2eb-4d16-9c9b-80aa665f5086 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.423766341Z" level=info msg="runSandbox: stopping storage container for sandbox cd254b12a3e7759376352d255367fd9db65d5a0bf3505ad261dc8257f5ff7081" file="resourcestore/resourcecleaner.go:69" id=c888956a-8811-42cb-933a-605635718e42 name=/runtime.v1.RuntimeService/RunPodSandbox
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.424155156Z" level=debug msg="Failed to unmount 4d6f9f9898b1a4c0a4fc7be362cbfc3ed7e5bb02ca68c61f9b144f94d498acaf overlay: /var/lib/containers/storage/overlay/4d6f9f9898b1a4c0a4fc7be362cbfc3ed7e5bb02ca68c61f9b144f94d498acaf/merged - invalid argument" file="overlay/overlay.go:1898"
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.424235391Z" level=debug msg="Failed to remove mountpoint 4d6f9f9898b1a4c0a4fc7be362cbfc3ed7e5bb02ca68c61f9b144f94d498acaf overlay: /var/lib/containers/storage/overlay/4d6f9f9898b1a4c0a4fc7be362cbfc3ed7e5bb02ca68c61f9b144f94d498acaf/merged - directory not empty" file="overlay/overlay.go:1906"
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.424552462Z" level=warning msg="Failed to unmount container cd254b12a3e7759376352d255367fd9db65d5a0bf3505ad261dc8257f5ff7081: removing mount point \"/var/lib/containers/storage/overlay/4d6f9f9898b1a4c0a4fc7be362cbfc3ed7e5bb02ca68c61f9b144f94d498acaf/merged\": directory not empty" file="storage/runtime.go:491" id=c888956a-8811-42cb-933a-605635718e42 name=/runtime.v1.RuntimeService/RunPodSandbox
	Apr 22 11:06:34 functional-668059 crio[4895]: time="2024-04-22 11:06:34.424585082Z" level=error msg="Failed to cleanup (probably retrying): could not stop storage container: cd254b12a3e7759376352d255367fd9db65d5a0bf3505ad261dc8257f5ff7081: removing mount point \"/var/lib/containers/storage/overlay/4d6f9f9898b1a4c0a4fc7be362cbfc3ed7e5bb02ca68c61f9b144f94d498acaf/merged\": directory not empty" file="resourcestore/resourcecleaner.go:71" id=c888956a-8811-42cb-933a-605635718e42 name=/runtime.v1.RuntimeService/RunPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	95aecaf18b388       docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419                  4 minutes ago       Running             myfrontend                  0                   3cfeff0fffca7       sp-pod
	a592bd33227f0       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   4 minutes ago       Running             dashboard-metrics-scraper   0                   1c760c0860f20       dashboard-metrics-scraper-b5fc48f67-ddg72
	14ea71d86fc69       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  4 minutes ago       Running             mysql                       0                   dd7b61237f6ca       mysql-64454c8b5c-78smp
	32b10d9cd58e8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              5 minutes ago       Exited              mount-munger                0                   21ed82f51c9a5       busybox-mount
	9fe852ccb6f24       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               5 minutes ago       Running             echoserver                  0                   948eecd026da1       hello-node-connect-57b4589c47-fwc4t
	8279c1c6c383b       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               5 minutes ago       Running             echoserver                  0                   979d0b4f9b989       hello-node-6d85cfcfd8-2fmdd
	cd9027e93b686       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 5 minutes ago       Running             storage-provisioner         0                   0d56e7329b0a6       storage-provisioner
	72d5f416ed1be       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                 5 minutes ago       Running             coredns                     0                   eea098d449833       coredns-7db6d8ff4d-wnvsz
	cf7c1c7fc5290       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                 5 minutes ago       Running             coredns                     0                   8fcf586c47e1b       coredns-7db6d8ff4d-fs679
	87bc9d56e78f5       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                 5 minutes ago       Running             kube-proxy                  0                   00bd237d2b2e7       kube-proxy-66qb6
	a63cb00927ff1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                 5 minutes ago       Running             etcd                        3                   298c881132b7e       etcd-functional-668059
	4127049f9dda8       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                 5 minutes ago       Running             kube-scheduler              3                   176c575c0333c       kube-scheduler-functional-668059
	68c95cb64f292       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                 5 minutes ago       Running             kube-apiserver              1                   6b7bf00e46453       kube-apiserver-functional-668059
	a12c0714c8a6b       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                 5 minutes ago       Running             kube-controller-manager     3                   ca41a3c9c188d       kube-controller-manager-functional-668059
	
	
	==> coredns [72d5f416ed1beeadddf5bef2098654435bfa5ae91c3e7da6b8c454e603ab6a3e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [cf7c1c7fc529024308fd4d5a0ce7ffdac74f093bea377109f6f595177e84166f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               functional-668059
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-668059
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=functional-668059
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T11_00_52_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:00:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-668059
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:06:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:02:24 +0000   Mon, 22 Apr 2024 11:00:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:02:24 +0000   Mon, 22 Apr 2024 11:00:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:02:24 +0000   Mon, 22 Apr 2024 11:00:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:02:24 +0000   Mon, 22 Apr 2024 11:00:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    functional-668059
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a3df4e391084740bd046e968a69018c
	  System UUID:                2a3df4e3-9108-4740-bd04-6e968a69018c
	  Boot ID:                    d1d1f6fd-68fd-4273-9cac-f7ea3d99b8f4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6d85cfcfd8-2fmdd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  default                     hello-node-connect-57b4589c47-fwc4t          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  default                     mysql-64454c8b5c-78smp                       600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    5m2s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 coredns-7db6d8ff4d-fs679                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m29s
	  kube-system                 coredns-7db6d8ff4d-wnvsz                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m29s
	  kube-system                 etcd-functional-668059                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m42s
	  kube-system                 kube-apiserver-functional-668059             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-controller-manager-functional-668059    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 kube-proxy-66qb6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-scheduler-functional-668059             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-ddg72    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-mf8wm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (27%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m27s  kube-proxy       
	  Normal  Starting                 5m42s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m42s  kubelet          Node functional-668059 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m42s  kubelet          Node functional-668059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m42s  kubelet          Node functional-668059 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m42s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m29s  node-controller  Node functional-668059 event: Registered Node functional-668059 in Controller
	
	
	==> dmesg <==
	[  +4.678265] kauditd_printk_skb: 75 callbacks suppressed
	[Apr22 10:54] kauditd_printk_skb: 35 callbacks suppressed
	[  +0.473619] systemd-fstab-generator[3614]: Ignoring "noauto" option for root device
	[ +18.310494] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.650233] systemd-fstab-generator[4714]: Ignoring "noauto" option for root device
	[  +0.186234] systemd-fstab-generator[4770]: Ignoring "noauto" option for root device
	[  +0.183788] systemd-fstab-generator[4785]: Ignoring "noauto" option for root device
	[  +0.141501] systemd-fstab-generator[4797]: Ignoring "noauto" option for root device
	[  +0.299447] systemd-fstab-generator[4825]: Ignoring "noauto" option for root device
	[Apr22 10:56] systemd-fstab-generator[5007]: Ignoring "noauto" option for root device
	[  +0.076326] kauditd_printk_skb: 158 callbacks suppressed
	[  +2.070764] systemd-fstab-generator[5347]: Ignoring "noauto" option for root device
	[  +5.660181] kauditd_printk_skb: 102 callbacks suppressed
	[Apr22 11:00] systemd-fstab-generator[7165]: Ignoring "noauto" option for root device
	[  +6.084172] systemd-fstab-generator[7486]: Ignoring "noauto" option for root device
	[  +0.089404] kauditd_printk_skb: 72 callbacks suppressed
	[Apr22 11:01] systemd-fstab-generator[7709]: Ignoring "noauto" option for root device
	[  +0.113016] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.110757] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.022398] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.048565] kauditd_printk_skb: 32 callbacks suppressed
	[  +7.233273] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.253595] kauditd_printk_skb: 24 callbacks suppressed
	[  +6.565812] kauditd_printk_skb: 8 callbacks suppressed
	[Apr22 11:02] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [a63cb00927ff1ff4352da0ec907eaec9c696ce08173dcd5da2df5f99730c96f8] <==
	{"level":"warn","ts":"2024-04-22T11:01:40.512117Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.27154ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:1 size:698"}
	{"level":"info","ts":"2024-04-22T11:01:40.512134Z","caller":"traceutil/trace.go:171","msg":"trace[446244032] range","detail":"{range_begin:/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:604; }","duration":"135.310766ms","start":"2024-04-22T11:01:40.376815Z","end":"2024-04-22T11:01:40.512126Z","steps":["trace[446244032] 'agreement among raft nodes before linearized reading'  (duration: 135.248055ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:01:43.044487Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.80973ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/sp-pod\" ","response":"range_response_count:1 size:3159"}
	{"level":"info","ts":"2024-04-22T11:01:43.044531Z","caller":"traceutil/trace.go:171","msg":"trace[1832553241] range","detail":"{range_begin:/registry/pods/default/sp-pod; range_end:; response_count:1; response_revision:607; }","duration":"416.862184ms","start":"2024-04-22T11:01:42.627658Z","end":"2024-04-22T11:01:43.04452Z","steps":["trace[1832553241] 'range keys from in-memory index tree'  (duration: 416.703341ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:01:43.044557Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T11:01:42.627624Z","time spent":"416.927622ms","remote":"127.0.0.1:56856","response type":"/etcdserverpb.KV/Range","request count":0,"request size":31,"response count":1,"response size":3182,"request content":"key:\"/registry/pods/default/sp-pod\" "}
	{"level":"warn","ts":"2024-04-22T11:01:43.044712Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"382.646161ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:14859"}
	{"level":"info","ts":"2024-04-22T11:01:43.044727Z","caller":"traceutil/trace.go:171","msg":"trace[1395516891] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:607; }","duration":"382.679273ms","start":"2024-04-22T11:01:42.662043Z","end":"2024-04-22T11:01:43.044722Z","steps":["trace[1395516891] 'range keys from in-memory index tree'  (duration: 382.563873ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:01:43.044739Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T11:01:42.662029Z","time spent":"382.706685ms","remote":"127.0.0.1:56856","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":14882,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-04-22T11:01:46.773102Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":4905984622727005252,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-04-22T11:01:46.825326Z","caller":"traceutil/trace.go:171","msg":"trace[765235782] transaction","detail":"{read_only:false; response_revision:613; number_of_response:1; }","duration":"576.438062ms","start":"2024-04-22T11:01:46.248873Z","end":"2024-04-22T11:01:46.825311Z","steps":["trace[765235782] 'process raft request'  (duration: 576.247743ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:01:46.825518Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T11:01:46.248857Z","time spent":"576.508252ms","remote":"127.0.0.1:56856","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3340,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/sp-pod\" mod_revision:612 > success:<request_put:<key:\"/registry/pods/default/sp-pod\" value_size:3303 >> failure:<request_range:<key:\"/registry/pods/default/sp-pod\" > >"}
	{"level":"info","ts":"2024-04-22T11:01:46.829348Z","caller":"traceutil/trace.go:171","msg":"trace[1275082419] linearizableReadLoop","detail":"{readStateIndex:638; appliedIndex:636; }","duration":"556.968009ms","start":"2024-04-22T11:01:46.272367Z","end":"2024-04-22T11:01:46.829335Z","steps":["trace[1275082419] 'read index received'  (duration: 552.762801ms)","trace[1275082419] 'applied index is now lower than readState.Index'  (duration: 4.204175ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-22T11:01:46.829523Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"557.148058ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:1 size:698"}
	{"level":"info","ts":"2024-04-22T11:01:46.829548Z","caller":"traceutil/trace.go:171","msg":"trace[929480913] range","detail":"{range_begin:/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:613; }","duration":"557.20608ms","start":"2024-04-22T11:01:46.272334Z","end":"2024-04-22T11:01:46.82954Z","steps":["trace[929480913] 'agreement among raft nodes before linearized reading'  (duration: 557.081669ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:01:46.829566Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T11:01:46.272236Z","time spent":"557.326687ms","remote":"127.0.0.1:56848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":721,"request content":"key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" "}
	{"level":"warn","ts":"2024-04-22T11:01:46.829576Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.815235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-04-22T11:01:46.829605Z","caller":"traceutil/trace.go:171","msg":"trace[1719463418] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:613; }","duration":"239.870669ms","start":"2024-04-22T11:01:46.589726Z","end":"2024-04-22T11:01:46.829597Z","steps":["trace[1719463418] 'agreement among raft nodes before linearized reading'  (duration: 239.778175ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:01:46.82974Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.245546ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:15055"}
	{"level":"info","ts":"2024-04-22T11:01:46.829757Z","caller":"traceutil/trace.go:171","msg":"trace[1119405375] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:613; }","duration":"168.284286ms","start":"2024-04-22T11:01:46.661467Z","end":"2024-04-22T11:01:46.829752Z","steps":["trace[1119405375] 'agreement among raft nodes before linearized reading'  (duration: 168.1895ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T11:01:51.467023Z","caller":"traceutil/trace.go:171","msg":"trace[71785938] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"161.381727ms","start":"2024-04-22T11:01:51.305623Z","end":"2024-04-22T11:01:51.467005Z","steps":["trace[71785938] 'process raft request'  (duration: 161.308718ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T11:01:51.467342Z","caller":"traceutil/trace.go:171","msg":"trace[901881095] transaction","detail":"{read_only:false; response_revision:628; number_of_response:1; }","duration":"167.495662ms","start":"2024-04-22T11:01:51.299823Z","end":"2024-04-22T11:01:51.467318Z","steps":["trace[901881095] 'process raft request'  (duration: 164.881208ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:01:53.796345Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.55516ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:14862"}
	{"level":"info","ts":"2024-04-22T11:01:53.796401Z","caller":"traceutil/trace.go:171","msg":"trace[664403899] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:636; }","duration":"127.648565ms","start":"2024-04-22T11:01:53.668734Z","end":"2024-04-22T11:01:53.796383Z","steps":["trace[664403899] 'range keys from in-memory index tree'  (duration: 127.385074ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:01:53.796565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.929292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:1 size:698"}
	{"level":"info","ts":"2024-04-22T11:01:53.796598Z","caller":"traceutil/trace.go:171","msg":"trace[1157416262] range","detail":"{range_begin:/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:636; }","duration":"128.99264ms","start":"2024-04-22T11:01:53.667596Z","end":"2024-04-22T11:01:53.796589Z","steps":["trace[1157416262] 'range keys from in-memory index tree'  (duration: 128.829594ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:06:34 up 14 min,  0 users,  load average: 0.08, 0.36, 0.25
	Linux functional-668059 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [68c95cb64f2922aa63a9a2179923dc524488581a2f37550ece4792c1a2f5babd] <==
	I0422 11:00:51.237235       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0422 11:00:51.244719       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220]
	I0422 11:00:51.245714       1 controller.go:615] quota admission added evaluator for: endpoints
	I0422 11:00:51.251589       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0422 11:00:51.607046       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0422 11:00:52.105958       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0422 11:00:52.123099       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0422 11:00:52.140185       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0422 11:01:05.812189       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0422 11:01:05.894183       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0422 11:01:13.319331       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.54.51"}
	I0422 11:01:17.612898       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.55.127"}
	I0422 11:01:19.331709       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.116.235"}
	I0422 11:01:32.605112       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.20.237"}
	I0422 11:01:35.832128       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.131.44"}
	I0422 11:01:35.861907       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.200.243"}
	E0422 11:01:43.190817       1 conn.go:339] Error on socket receive: read tcp 192.168.39.220:8441->192.168.39.1:43590: use of closed network connection
	I0422 11:01:46.826455       1 trace.go:236] Trace[1475401293]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:50ab7df8-122c-41eb-8e90-fd97fafe15d9,client:192.168.39.220,api-group:,api-version:v1,name:sp-pod,subresource:status,namespace:default,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/default/pods/sp-pod/status,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PATCH (22-Apr-2024 11:01:46.246) (total time: 580ms):
	Trace[1475401293]: ["GuaranteedUpdate etcd3" audit-id:50ab7df8-122c-41eb-8e90-fd97fafe15d9,key:/pods/default/sp-pod,type:*core.Pod,resource:pods 579ms (11:01:46.246)
	Trace[1475401293]:  ---"Txn call completed" 577ms (11:01:46.826)]
	Trace[1475401293]: ---"Object stored in database" 578ms (11:01:46.826)
	Trace[1475401293]: [580.268481ms] [580.268481ms] END
	E0422 11:01:57.870051       1 conn.go:339] Error on socket receive: read tcp 192.168.39.220:8441->192.168.39.1:54492: use of closed network connection
	E0422 11:01:59.232972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.220:8441->192.168.39.1:34596: use of closed network connection
	E0422 11:02:02.213536       1 conn.go:339] Error on socket receive: read tcp 192.168.39.220:8441->192.168.39.1:34624: use of closed network connection
	
	
	==> kube-controller-manager [a12c0714c8a6b8530ddc53ea5aefe8393eb666d15b87b3aee9206aa0f4867373] <==
	E0422 11:01:35.584839       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0422 11:01:35.584942       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="16.935717ms"
	E0422 11:01:35.584982       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0422 11:01:35.593115       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="8.014324ms"
	E0422 11:01:35.593172       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0422 11:01:35.596749       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="9.700506ms"
	E0422 11:01:35.596794       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0422 11:01:35.610403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="17.198313ms"
	E0422 11:01:35.610453       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0422 11:01:35.620502       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="23.144834ms"
	E0422 11:01:35.620823       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0422 11:01:35.631177       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="20.666453ms"
	E0422 11:01:35.631231       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0422 11:01:35.706535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="75.192046ms"
	I0422 11:01:35.724617       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="18.013561ms"
	I0422 11:01:35.724858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="53.761µs"
	I0422 11:01:35.740087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="73.022179ms"
	I0422 11:01:35.748908       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="51.329µs"
	I0422 11:01:35.776907       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="36.736992ms"
	I0422 11:01:35.807104       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="30.074065ms"
	I0422 11:01:35.807230       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="60.017µs"
	I0422 11:01:51.491045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="19.586072ms"
	I0422 11:01:51.491164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="56.876µs"
	I0422 11:01:54.336480       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="17.343625ms"
	I0422 11:01:54.339586       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="45.337µs"
	
	
	==> kube-proxy [87bc9d56e78f5bc4a7df46ac6bf7f07304cd09b461635bdb412bfff32b7fd8cd] <==
	I0422 11:01:06.854402       1 server_linux.go:69] "Using iptables proxy"
	I0422 11:01:06.870040       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.220"]
	I0422 11:01:06.965513       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 11:01:06.972484       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 11:01:06.972517       1 server_linux.go:165] "Using iptables Proxier"
	I0422 11:01:07.024155       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 11:01:07.032854       1 server.go:872] "Version info" version="v1.30.0"
	I0422 11:01:07.032876       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:01:07.042035       1 config.go:192] "Starting service config controller"
	I0422 11:01:07.042048       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 11:01:07.042074       1 config.go:101] "Starting endpoint slice config controller"
	I0422 11:01:07.042078       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 11:01:07.045931       1 config.go:319] "Starting node config controller"
	I0422 11:01:07.046047       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 11:01:07.150450       1 shared_informer.go:320] Caches are synced for node config
	I0422 11:01:07.152418       1 shared_informer.go:320] Caches are synced for service config
	I0422 11:01:07.142612       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4127049f9dda8a1735578b36b7495a09dec2eecc954afa8200b90d0e89d4ac07] <==
	W0422 11:00:49.622973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 11:00:49.625265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 11:00:49.625403       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 11:00:49.625411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 11:00:49.625574       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 11:00:49.625583       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 11:00:49.625590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 11:00:49.625778       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 11:00:49.625596       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 11:00:49.625602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 11:00:50.480950       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 11:00:50.481082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 11:00:50.529369       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 11:00:50.529497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 11:00:50.634996       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 11:00:50.635089       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0422 11:00:50.699539       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 11:00:50.699597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0422 11:00:50.769134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 11:00:50.769190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 11:00:50.847129       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 11:00:50.847231       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 11:00:51.069485       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 11:00:51.069543       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0422 11:00:54.105912       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 22 11:01:52 functional-668059 kubelet[7493]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:01:54 functional-668059 kubelet[7493]: I0422 11:01:54.316151    7493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/mysql-64454c8b5c-78smp" podStartSLOduration=7.539225734 podStartE2EDuration="22.316132079s" podCreationTimestamp="2024-04-22 11:01:32 +0000 UTC" firstStartedPulling="2024-04-22 11:01:35.529003014 +0000 UTC m=+43.605668679" lastFinishedPulling="2024-04-22 11:01:50.305909348 +0000 UTC m=+58.382575024" observedRunningTime="2024-04-22 11:01:51.473067416 +0000 UTC m=+59.549733102" watchObservedRunningTime="2024-04-22 11:01:54.316132079 +0000 UTC m=+62.392797762"
	Apr 22 11:01:55 functional-668059 kubelet[7493]: I0422 11:01:55.356504    7493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-ddg72" podStartSLOduration=4.562265287 podStartE2EDuration="20.356490079s" podCreationTimestamp="2024-04-22 11:01:35 +0000 UTC" firstStartedPulling="2024-04-22 11:01:38.174563522 +0000 UTC m=+46.251229190" lastFinishedPulling="2024-04-22 11:01:53.968788318 +0000 UTC m=+62.045453982" observedRunningTime="2024-04-22 11:01:54.318090868 +0000 UTC m=+62.394756550" watchObservedRunningTime="2024-04-22 11:01:55.356490079 +0000 UTC m=+63.433155763"
	Apr 22 11:01:55 functional-668059 kubelet[7493]: I0422 11:01:55.356653    7493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=4.887807917 podStartE2EDuration="9.356648343s" podCreationTimestamp="2024-04-22 11:01:46 +0000 UTC" firstStartedPulling="2024-04-22 11:01:50.562510588 +0000 UTC m=+58.639176253" lastFinishedPulling="2024-04-22 11:01:55.031351013 +0000 UTC m=+63.108016679" observedRunningTime="2024-04-22 11:01:55.356446872 +0000 UTC m=+63.433112555" watchObservedRunningTime="2024-04-22 11:01:55.356648343 +0000 UTC m=+63.433314026"
	Apr 22 11:01:59 functional-668059 kubelet[7493]: E0422 11:01:59.233670    7493 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55580->127.0.0.1:35009: write tcp 127.0.0.1:55580->127.0.0.1:35009: write: broken pipe
	Apr 22 11:02:52 functional-668059 kubelet[7493]: E0422 11:02:52.197839    7493 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:02:52 functional-668059 kubelet[7493]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:02:52 functional-668059 kubelet[7493]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:02:52 functional-668059 kubelet[7493]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:02:52 functional-668059 kubelet[7493]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:03:52 functional-668059 kubelet[7493]: E0422 11:03:52.194561    7493 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:03:52 functional-668059 kubelet[7493]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:03:52 functional-668059 kubelet[7493]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:03:52 functional-668059 kubelet[7493]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:03:52 functional-668059 kubelet[7493]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:04:52 functional-668059 kubelet[7493]: E0422 11:04:52.195090    7493 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:04:52 functional-668059 kubelet[7493]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:04:52 functional-668059 kubelet[7493]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:04:52 functional-668059 kubelet[7493]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:04:52 functional-668059 kubelet[7493]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:05:52 functional-668059 kubelet[7493]: E0422 11:05:52.193343    7493 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:05:52 functional-668059 kubelet[7493]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:05:52 functional-668059 kubelet[7493]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:05:52 functional-668059 kubelet[7493]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:05:52 functional-668059 kubelet[7493]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [cd9027e93b686b13cbaf7f1e7b4e0bb5f72a09830debcea872f264b19a08d9bc] <==
	I0422 11:01:07.775235       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 11:01:07.789030       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 11:01:07.789181       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0422 11:01:07.797586       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0422 11:01:07.797770       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-668059_f7038a4d-ac56-4206-9ef5-466e6847c789!
	I0422 11:01:07.798939       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d67d3c4b-dd90-4f60-b19b-3decacdc5f37", APIVersion:"v1", ResourceVersion:"368", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-668059_f7038a4d-ac56-4206-9ef5-466e6847c789 became leader
	I0422 11:01:07.897975       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-668059_f7038a4d-ac56-4206-9ef5-466e6847c789!
	I0422 11:01:24.070725       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0422 11:01:24.073735       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"1f8189d6-97a4-456f-8956-cedd9ef6eace", APIVersion:"v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0422 11:01:24.072802       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    f159f273-d2c2-41d8-84cc-95a28ba3c539 345 0 2024-04-22 11:01:06 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-04-22 11:01:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-1f8189d6-97a4-456f-8956-cedd9ef6eace &PersistentVolumeClaim{ObjectMeta:{myclaim  default  1f8189d6-97a4-456f-8956-cedd9ef6eace 490 0 2024-04-22 11:01:24 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-04-22 11:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-04-22 11:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0422 11:01:24.077602       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-1f8189d6-97a4-456f-8956-cedd9ef6eace" provisioned
	I0422 11:01:24.077725       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0422 11:01:24.077736       1 volume_store.go:212] Trying to save persistentvolume "pvc-1f8189d6-97a4-456f-8956-cedd9ef6eace"
	I0422 11:01:24.101614       1 volume_store.go:219] persistentvolume "pvc-1f8189d6-97a4-456f-8956-cedd9ef6eace" saved
	I0422 11:01:24.101729       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"1f8189d6-97a4-456f-8956-cedd9ef6eace", APIVersion:"v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-1f8189d6-97a4-456f-8956-cedd9ef6eace
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-668059 -n functional-668059
helpers_test.go:261: (dbg) Run:  kubectl --context functional-668059 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount kubernetes-dashboard-779776cb65-mf8wm
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-668059 describe pod busybox-mount kubernetes-dashboard-779776cb65-mf8wm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-668059 describe pod busybox-mount kubernetes-dashboard-779776cb65-mf8wm: exit status 1 (66.521588ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-668059/192.168.39.220
	Start Time:       Mon, 22 Apr 2024 11:01:21 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  cri-o://32b10d9cd58e83965c7b248388acf0b293bcafe49f19495b8847a695ab2259ea
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 22 Apr 2024 11:01:25 +0000
	      Finished:     Mon, 22 Apr 2024 11:01:25 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5nqb5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-5nqb5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m14s  default-scheduler  Successfully assigned default/busybox-mount to functional-668059
	  Normal  Pulling    5m13s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m10s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.722s (2.722s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m10s  kubelet            Created container mount-munger
	  Normal  Started    5m10s  kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-779776cb65-mf8wm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-668059 describe pod busybox-mount kubernetes-dashboard-779776cb65-mf8wm: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.20s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (142.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 node stop m02 -v=7 --alsologtostderr
E0422 11:11:20.203527   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
E0422 11:11:22.764276   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
E0422 11:11:27.884852   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
E0422 11:11:38.126006   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
E0422 11:11:57.324696   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 11:11:58.606326   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
E0422 11:12:39.567148   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-821265 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.496672018s)

                                                
                                                
-- stdout --
	* Stopping node "ha-821265-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 11:11:19.060680   31749 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:11:19.060870   31749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:11:19.060882   31749 out.go:304] Setting ErrFile to fd 2...
	I0422 11:11:19.060888   31749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:11:19.061100   31749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:11:19.061436   31749 mustload.go:65] Loading cluster: ha-821265
	I0422 11:11:19.061981   31749 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:11:19.062005   31749 stop.go:39] StopHost: ha-821265-m02
	I0422 11:11:19.062560   31749 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:11:19.062619   31749 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:11:19.078170   31749 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45549
	I0422 11:11:19.078677   31749 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:11:19.079294   31749 main.go:141] libmachine: Using API Version  1
	I0422 11:11:19.079317   31749 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:11:19.079635   31749 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:11:19.082193   31749 out.go:177] * Stopping node "ha-821265-m02"  ...
	I0422 11:11:19.083536   31749 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0422 11:11:19.083569   31749 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:11:19.083851   31749 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0422 11:11:19.083881   31749 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:11:19.086854   31749 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:11:19.087333   31749 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:11:19.087360   31749 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:11:19.087505   31749 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:11:19.087677   31749 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:11:19.087829   31749 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:11:19.087992   31749 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa Username:docker}
	I0422 11:11:19.172875   31749 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0422 11:11:19.227722   31749 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0422 11:11:19.284557   31749 main.go:141] libmachine: Stopping "ha-821265-m02"...
	I0422 11:11:19.284583   31749 main.go:141] libmachine: (ha-821265-m02) Calling .GetState
	I0422 11:11:19.286074   31749 main.go:141] libmachine: (ha-821265-m02) Calling .Stop
	I0422 11:11:19.289485   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 0/120
	I0422 11:11:20.291168   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 1/120
	I0422 11:11:21.292582   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 2/120
	I0422 11:11:22.293919   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 3/120
	I0422 11:11:23.295170   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 4/120
	I0422 11:11:24.297174   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 5/120
	I0422 11:11:25.298444   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 6/120
	I0422 11:11:26.300365   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 7/120
	I0422 11:11:27.301666   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 8/120
	I0422 11:11:28.303772   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 9/120
	I0422 11:11:29.306469   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 10/120
	I0422 11:11:30.308755   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 11/120
	I0422 11:11:31.310204   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 12/120
	I0422 11:11:32.311624   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 13/120
	I0422 11:11:33.313253   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 14/120
	I0422 11:11:34.314887   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 15/120
	I0422 11:11:35.316692   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 16/120
	I0422 11:11:36.318111   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 17/120
	I0422 11:11:37.320038   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 18/120
	I0422 11:11:38.321770   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 19/120
	I0422 11:11:39.323736   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 20/120
	I0422 11:11:40.325306   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 21/120
	I0422 11:11:41.327341   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 22/120
	I0422 11:11:42.328650   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 23/120
	I0422 11:11:43.329982   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 24/120
	I0422 11:11:44.331635   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 25/120
	I0422 11:11:45.333073   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 26/120
	I0422 11:11:46.335219   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 27/120
	I0422 11:11:47.336525   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 28/120
	I0422 11:11:48.338140   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 29/120
	I0422 11:11:49.340408   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 30/120
	I0422 11:11:50.342016   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 31/120
	I0422 11:11:51.343532   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 32/120
	I0422 11:11:52.344913   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 33/120
	I0422 11:11:53.347153   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 34/120
	I0422 11:11:54.349320   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 35/120
	I0422 11:11:55.351344   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 36/120
	I0422 11:11:56.353513   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 37/120
	I0422 11:11:57.355365   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 38/120
	I0422 11:11:58.356855   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 39/120
	I0422 11:11:59.359220   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 40/120
	I0422 11:12:00.360978   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 41/120
	I0422 11:12:01.363134   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 42/120
	I0422 11:12:02.364928   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 43/120
	I0422 11:12:03.366197   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 44/120
	I0422 11:12:04.368085   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 45/120
	I0422 11:12:05.370097   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 46/120
	I0422 11:12:06.372202   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 47/120
	I0422 11:12:07.373726   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 48/120
	I0422 11:12:08.375314   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 49/120
	I0422 11:12:09.377650   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 50/120
	I0422 11:12:10.379248   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 51/120
	I0422 11:12:11.380758   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 52/120
	I0422 11:12:12.382461   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 53/120
	I0422 11:12:13.383757   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 54/120
	I0422 11:12:14.386293   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 55/120
	I0422 11:12:15.387653   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 56/120
	I0422 11:12:16.388990   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 57/120
	I0422 11:12:17.391380   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 58/120
	I0422 11:12:18.392697   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 59/120
	I0422 11:12:19.394942   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 60/120
	I0422 11:12:20.396586   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 61/120
	I0422 11:12:21.397841   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 62/120
	I0422 11:12:22.399819   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 63/120
	I0422 11:12:23.401885   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 64/120
	I0422 11:12:24.403872   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 65/120
	I0422 11:12:25.405547   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 66/120
	I0422 11:12:26.407384   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 67/120
	I0422 11:12:27.408622   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 68/120
	I0422 11:12:28.410027   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 69/120
	I0422 11:12:29.412471   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 70/120
	I0422 11:12:30.414457   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 71/120
	I0422 11:12:31.416257   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 72/120
	I0422 11:12:32.417757   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 73/120
	I0422 11:12:33.419150   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 74/120
	I0422 11:12:34.421219   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 75/120
	I0422 11:12:35.423423   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 76/120
	I0422 11:12:36.424973   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 77/120
	I0422 11:12:37.427454   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 78/120
	I0422 11:12:38.429237   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 79/120
	I0422 11:12:39.431743   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 80/120
	I0422 11:12:40.433343   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 81/120
	I0422 11:12:41.435491   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 82/120
	I0422 11:12:42.437484   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 83/120
	I0422 11:12:43.439328   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 84/120
	I0422 11:12:44.441393   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 85/120
	I0422 11:12:45.443275   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 86/120
	I0422 11:12:46.444786   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 87/120
	I0422 11:12:47.446166   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 88/120
	I0422 11:12:48.447760   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 89/120
	I0422 11:12:49.449366   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 90/120
	I0422 11:12:50.450988   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 91/120
	I0422 11:12:51.452501   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 92/120
	I0422 11:12:52.454151   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 93/120
	I0422 11:12:53.455528   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 94/120
	I0422 11:12:54.457094   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 95/120
	I0422 11:12:55.458681   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 96/120
	I0422 11:12:56.460877   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 97/120
	I0422 11:12:57.462225   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 98/120
	I0422 11:12:58.463724   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 99/120
	I0422 11:12:59.466046   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 100/120
	I0422 11:13:00.467284   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 101/120
	I0422 11:13:01.468597   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 102/120
	I0422 11:13:02.470331   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 103/120
	I0422 11:13:03.471900   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 104/120
	I0422 11:13:04.473968   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 105/120
	I0422 11:13:05.475557   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 106/120
	I0422 11:13:06.477179   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 107/120
	I0422 11:13:07.479393   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 108/120
	I0422 11:13:08.480687   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 109/120
	I0422 11:13:09.482966   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 110/120
	I0422 11:13:10.484570   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 111/120
	I0422 11:13:11.486014   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 112/120
	I0422 11:13:12.487402   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 113/120
	I0422 11:13:13.489684   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 114/120
	I0422 11:13:14.491690   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 115/120
	I0422 11:13:15.493237   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 116/120
	I0422 11:13:16.495284   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 117/120
	I0422 11:13:17.496657   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 118/120
	I0422 11:13:18.498806   31749 main.go:141] libmachine: (ha-821265-m02) Waiting for machine to stop 119/120
	I0422 11:13:19.500166   31749 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0422 11:13:19.500310   31749 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-821265 node stop m02 -v=7 --alsologtostderr": exit status 30
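The stderr above shows the stop path polling the kvm2 driver once per second ("Waiting for machine to stop 0/120" through "119/120") before giving up with `unable to stop vm, current state "Running"`; 120 one-second retries account for the 2m0.496672018s wall time and the exit status 30. Below is a minimal Go sketch of that retry shape, with a hypothetical getState placeholder standing in for the real driver call (illustrative only, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState stands in for the state string the kvm2 driver reports.
type vmState string

const stateRunning vmState = "Running"

// getState is a hypothetical placeholder for the driver's state query; here it
// always reports "Running", reproducing the failure seen in the log above.
func getState() vmState { return stateRunning }

// waitForStop polls once per second, up to maxRetries times, and returns the
// same kind of error logged at 11:13:19 if the state never leaves "Running".
func waitForStop(maxRetries int) error {
	for i := 0; i < maxRetries; i++ {
		if getState() != stateRunning {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// The failed run above uses 120 retries (~2 minutes); 3 keeps this demo short.
	if err := waitForStop(3); err != nil {
		fmt.Println("stop err:", err)
	}
}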
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr: exit status 3 (19.258184571s)

                                                
                                                
-- stdout --
	ha-821265
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-821265-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 11:13:19.557779   32196 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:13:19.557894   32196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:13:19.557902   32196 out.go:304] Setting ErrFile to fd 2...
	I0422 11:13:19.557906   32196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:13:19.558101   32196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:13:19.558269   32196 out.go:298] Setting JSON to false
	I0422 11:13:19.558294   32196 mustload.go:65] Loading cluster: ha-821265
	I0422 11:13:19.558423   32196 notify.go:220] Checking for updates...
	I0422 11:13:19.558749   32196 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:13:19.558766   32196 status.go:255] checking status of ha-821265 ...
	I0422 11:13:19.559271   32196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:19.559348   32196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:19.583768   32196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35419
	I0422 11:13:19.584183   32196 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:19.584837   32196 main.go:141] libmachine: Using API Version  1
	I0422 11:13:19.584861   32196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:19.585170   32196 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:19.585381   32196 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:13:19.586801   32196 status.go:330] ha-821265 host status = "Running" (err=<nil>)
	I0422 11:13:19.586826   32196 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:13:19.587111   32196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:19.587141   32196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:19.601555   32196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44713
	I0422 11:13:19.601937   32196 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:19.602363   32196 main.go:141] libmachine: Using API Version  1
	I0422 11:13:19.602387   32196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:19.602660   32196 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:19.602862   32196 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:13:19.605691   32196 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:13:19.606126   32196 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:13:19.606163   32196 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:13:19.606272   32196 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:13:19.606645   32196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:19.606687   32196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:19.621262   32196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41279
	I0422 11:13:19.621603   32196 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:19.622073   32196 main.go:141] libmachine: Using API Version  1
	I0422 11:13:19.622093   32196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:19.622363   32196 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:19.622530   32196 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:13:19.622688   32196 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:13:19.622729   32196 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:13:19.625932   32196 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:13:19.626350   32196 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:13:19.626376   32196 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:13:19.626529   32196 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:13:19.626693   32196 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:13:19.626850   32196 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:13:19.626971   32196 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:13:19.712310   32196 ssh_runner.go:195] Run: systemctl --version
	I0422 11:13:19.721429   32196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:13:19.741788   32196 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:13:19.741812   32196 api_server.go:166] Checking apiserver status ...
	I0422 11:13:19.741840   32196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:13:19.758273   32196 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0422 11:13:19.769630   32196 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:13:19.769679   32196 ssh_runner.go:195] Run: ls
	I0422 11:13:19.775868   32196 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:13:19.780213   32196 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:13:19.780235   32196 status.go:422] ha-821265 apiserver status = Running (err=<nil>)
	I0422 11:13:19.780247   32196 status.go:257] ha-821265 status: &{Name:ha-821265 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:13:19.780272   32196 status.go:255] checking status of ha-821265-m02 ...
	I0422 11:13:19.780581   32196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:19.780629   32196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:19.795409   32196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40405
	I0422 11:13:19.795795   32196 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:19.796308   32196 main.go:141] libmachine: Using API Version  1
	I0422 11:13:19.796338   32196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:19.796648   32196 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:19.796865   32196 main.go:141] libmachine: (ha-821265-m02) Calling .GetState
	I0422 11:13:19.798732   32196 status.go:330] ha-821265-m02 host status = "Running" (err=<nil>)
	I0422 11:13:19.798752   32196 host.go:66] Checking if "ha-821265-m02" exists ...
	I0422 11:13:19.799113   32196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:19.799154   32196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:19.814029   32196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I0422 11:13:19.814432   32196 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:19.814926   32196 main.go:141] libmachine: Using API Version  1
	I0422 11:13:19.814951   32196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:19.815244   32196 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:19.815446   32196 main.go:141] libmachine: (ha-821265-m02) Calling .GetIP
	I0422 11:13:19.818217   32196 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:13:19.818712   32196 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:13:19.818744   32196 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:13:19.818927   32196 host.go:66] Checking if "ha-821265-m02" exists ...
	I0422 11:13:19.819220   32196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:19.819263   32196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:19.833646   32196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45829
	I0422 11:13:19.834091   32196 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:19.834662   32196 main.go:141] libmachine: Using API Version  1
	I0422 11:13:19.834686   32196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:19.834995   32196 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:19.835207   32196 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:13:19.835404   32196 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:13:19.835428   32196 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:13:19.838379   32196 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:13:19.838831   32196 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:13:19.838853   32196 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:13:19.838982   32196 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:13:19.839126   32196 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:13:19.839306   32196 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:13:19.839458   32196 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa Username:docker}
	W0422 11:13:38.377028   32196 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	W0422 11:13:38.377134   32196 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E0422 11:13:38.377154   32196 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0422 11:13:38.377166   32196 status.go:257] ha-821265-m02 status: &{Name:ha-821265-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 11:13:38.377191   32196 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0422 11:13:38.377201   32196 status.go:255] checking status of ha-821265-m03 ...
	I0422 11:13:38.377675   32196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:38.377737   32196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:38.392995   32196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34151
	I0422 11:13:38.393392   32196 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:38.393813   32196 main.go:141] libmachine: Using API Version  1
	I0422 11:13:38.393837   32196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:38.394128   32196 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:38.394272   32196 main.go:141] libmachine: (ha-821265-m03) Calling .GetState
	I0422 11:13:38.395729   32196 status.go:330] ha-821265-m03 host status = "Running" (err=<nil>)
	I0422 11:13:38.395741   32196 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:13:38.396016   32196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:38.396048   32196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:38.410162   32196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40351
	I0422 11:13:38.410609   32196 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:38.411093   32196 main.go:141] libmachine: Using API Version  1
	I0422 11:13:38.411113   32196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:38.411395   32196 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:38.411604   32196 main.go:141] libmachine: (ha-821265-m03) Calling .GetIP
	I0422 11:13:38.414398   32196 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:13:38.414756   32196 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:13:38.414772   32196 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:13:38.414918   32196 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:13:38.415224   32196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:38.415272   32196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:38.431716   32196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0422 11:13:38.432115   32196 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:38.432619   32196 main.go:141] libmachine: Using API Version  1
	I0422 11:13:38.432642   32196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:38.432974   32196 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:38.433170   32196 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:13:38.433368   32196 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:13:38.433392   32196 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:13:38.436293   32196 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:13:38.436810   32196 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:13:38.436834   32196 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:13:38.437065   32196 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:13:38.437286   32196 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:13:38.437450   32196 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:13:38.437617   32196 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:13:38.524837   32196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:13:38.546169   32196 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:13:38.546198   32196 api_server.go:166] Checking apiserver status ...
	I0422 11:13:38.546237   32196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:13:38.569680   32196 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup
	W0422 11:13:38.581313   32196 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:13:38.581359   32196 ssh_runner.go:195] Run: ls
	I0422 11:13:38.586372   32196 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:13:38.592957   32196 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:13:38.592979   32196 status.go:422] ha-821265-m03 apiserver status = Running (err=<nil>)
	I0422 11:13:38.592989   32196 status.go:257] ha-821265-m03 status: &{Name:ha-821265-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:13:38.593010   32196 status.go:255] checking status of ha-821265-m04 ...
	I0422 11:13:38.593308   32196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:38.593354   32196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:38.607654   32196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43593
	I0422 11:13:38.608061   32196 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:38.608484   32196 main.go:141] libmachine: Using API Version  1
	I0422 11:13:38.608505   32196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:38.608839   32196 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:38.609016   32196 main.go:141] libmachine: (ha-821265-m04) Calling .GetState
	I0422 11:13:38.610450   32196 status.go:330] ha-821265-m04 host status = "Running" (err=<nil>)
	I0422 11:13:38.610465   32196 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:13:38.610777   32196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:38.610821   32196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:38.625973   32196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36993
	I0422 11:13:38.626478   32196 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:38.626961   32196 main.go:141] libmachine: Using API Version  1
	I0422 11:13:38.626987   32196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:38.627311   32196 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:38.627487   32196 main.go:141] libmachine: (ha-821265-m04) Calling .GetIP
	I0422 11:13:38.630394   32196 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:13:38.630909   32196 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:13:38.630938   32196 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:13:38.631075   32196 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:13:38.631402   32196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:38.631437   32196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:38.646093   32196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46493
	I0422 11:13:38.646485   32196 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:38.646894   32196 main.go:141] libmachine: Using API Version  1
	I0422 11:13:38.646908   32196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:38.647224   32196 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:38.647401   32196 main.go:141] libmachine: (ha-821265-m04) Calling .DriverName
	I0422 11:13:38.647570   32196 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:13:38.647586   32196 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHHostname
	I0422 11:13:38.650155   32196 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:13:38.650567   32196 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:13:38.650601   32196 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:13:38.650737   32196 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHPort
	I0422 11:13:38.650895   32196 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHKeyPath
	I0422 11:13:38.651046   32196 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHUsername
	I0422 11:13:38.651187   32196 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m04/id_rsa Username:docker}
	I0422 11:13:38.739143   32196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:13:38.757868   32196 status.go:257] ha-821265-m04 status: &{Name:ha-821265-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr" : exit status 3
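In the status stderr above, ha-821265-m02 ends up as Host:Error because the `df -h /var` probe over SSH to 192.168.39.39:22 fails with `dial tcp 192.168.39.39:22: connect: no route to host`, while ha-821265, ha-821265-m03 and ha-821265-m04 answer normally. A minimal Go sketch of that kind of TCP reachability check follows (illustrative only; the real status path goes through sshutil and ssh_runner, not a bare dial):

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH reports "Running" when a TCP connection to the node's SSH port
// succeeds and "Error" when the dial fails, which is the distinction visible
// between ha-821265-m02 and the other nodes in the status output above.
func probeSSH(addr string) string {
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err != nil {
		return fmt.Sprintf("Error (%v)", err)
	}
	conn.Close()
	return "Running"
}

func main() {
	// 192.168.39.39 is m02's address from the log; with the guest unreachable
	// this prints an Error line such as "connect: no route to host".
	fmt.Println(probeSSH("192.168.39.39:22"))
	// 192.168.39.150 is the primary control plane, reachable in the log.
	fmt.Println(probeSSH("192.168.39.150:22"))
}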
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-821265 -n ha-821265
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-821265 logs -n 25: (1.595645383s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-821265 cp ha-821265-m03:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1102049705/001/cp-test_ha-821265-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m03:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265:/home/docker/cp-test_ha-821265-m03_ha-821265.txt                       |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265 sudo cat                                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m03_ha-821265.txt                                 |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m03:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m02:/home/docker/cp-test_ha-821265-m03_ha-821265-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265-m02 sudo cat                                          | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m03_ha-821265-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m03:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04:/home/docker/cp-test_ha-821265-m03_ha-821265-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265-m04 sudo cat                                          | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m03_ha-821265-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-821265 cp testdata/cp-test.txt                                                | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1102049705/001/cp-test_ha-821265-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265:/home/docker/cp-test_ha-821265-m04_ha-821265.txt                       |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265 sudo cat                                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m04_ha-821265.txt                                 |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m02:/home/docker/cp-test_ha-821265-m04_ha-821265-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265-m02 sudo cat                                          | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m04_ha-821265-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m03:/home/docker/cp-test_ha-821265-m04_ha-821265-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265-m03 sudo cat                                          | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m04_ha-821265-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-821265 node stop m02 -v=7                                                     | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 11:06:36
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 11:06:36.919621   27717 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:06:36.919762   27717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:06:36.919772   27717 out.go:304] Setting ErrFile to fd 2...
	I0422 11:06:36.919776   27717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:06:36.920011   27717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:06:36.920598   27717 out.go:298] Setting JSON to false
	I0422 11:06:36.921508   27717 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2940,"bootTime":1713781057,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 11:06:36.921564   27717 start.go:139] virtualization: kvm guest
	I0422 11:06:36.924070   27717 out.go:177] * [ha-821265] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 11:06:36.925731   27717 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 11:06:36.925754   27717 notify.go:220] Checking for updates...
	I0422 11:06:36.927327   27717 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 11:06:36.929125   27717 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 11:06:36.930866   27717 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:06:36.932528   27717 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 11:06:36.933849   27717 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 11:06:36.935461   27717 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 11:06:36.970577   27717 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 11:06:36.971929   27717 start.go:297] selected driver: kvm2
	I0422 11:06:36.971944   27717 start.go:901] validating driver "kvm2" against <nil>
	I0422 11:06:36.971968   27717 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 11:06:36.972628   27717 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 11:06:36.972698   27717 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18711-7633/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 11:06:36.987477   27717 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 11:06:36.987571   27717 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 11:06:36.987822   27717 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 11:06:36.987880   27717 cni.go:84] Creating CNI manager for ""
	I0422 11:06:36.987892   27717 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0422 11:06:36.987899   27717 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0422 11:06:36.987951   27717 start.go:340] cluster config:
	{Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:06:36.988054   27717 iso.go:125] acquiring lock: {Name:mkb6ac9fd17ffabc92a94047094130aad6203a95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 11:06:36.990053   27717 out.go:177] * Starting "ha-821265" primary control-plane node in "ha-821265" cluster
	I0422 11:06:36.991343   27717 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 11:06:36.991387   27717 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 11:06:36.991394   27717 cache.go:56] Caching tarball of preloaded images
	I0422 11:06:36.991465   27717 preload.go:173] Found /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 11:06:36.991475   27717 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 11:06:36.991772   27717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:06:36.991791   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json: {Name:mk1d94c9e38faf6fed2be29eb597dfabf13d6e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:06:36.991917   27717 start.go:360] acquireMachinesLock for ha-821265: {Name:mk5cb9b294e703b264c1f97ac968ffd01e93b576 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 11:06:36.991945   27717 start.go:364] duration metric: took 14.45µs to acquireMachinesLock for "ha-821265"
	I0422 11:06:36.991960   27717 start.go:93] Provisioning new machine with config: &{Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 11:06:36.992013   27717 start.go:125] createHost starting for "" (driver="kvm2")
	I0422 11:06:36.993682   27717 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 11:06:36.993801   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:06:36.993839   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:06:37.007885   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44193
	I0422 11:06:37.008312   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:06:37.008926   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:06:37.008958   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:06:37.009325   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:06:37.009545   27717 main.go:141] libmachine: (ha-821265) Calling .GetMachineName
	I0422 11:06:37.009729   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:06:37.009882   27717 start.go:159] libmachine.API.Create for "ha-821265" (driver="kvm2")
	I0422 11:06:37.009910   27717 client.go:168] LocalClient.Create starting
	I0422 11:06:37.009945   27717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem
	I0422 11:06:37.009987   27717 main.go:141] libmachine: Decoding PEM data...
	I0422 11:06:37.010001   27717 main.go:141] libmachine: Parsing certificate...
	I0422 11:06:37.010050   27717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem
	I0422 11:06:37.010067   27717 main.go:141] libmachine: Decoding PEM data...
	I0422 11:06:37.010079   27717 main.go:141] libmachine: Parsing certificate...
	I0422 11:06:37.010092   27717 main.go:141] libmachine: Running pre-create checks...
	I0422 11:06:37.010100   27717 main.go:141] libmachine: (ha-821265) Calling .PreCreateCheck
	I0422 11:06:37.010493   27717 main.go:141] libmachine: (ha-821265) Calling .GetConfigRaw
	I0422 11:06:37.010914   27717 main.go:141] libmachine: Creating machine...
	I0422 11:06:37.010927   27717 main.go:141] libmachine: (ha-821265) Calling .Create
	I0422 11:06:37.011077   27717 main.go:141] libmachine: (ha-821265) Creating KVM machine...
	I0422 11:06:37.012339   27717 main.go:141] libmachine: (ha-821265) DBG | found existing default KVM network
	I0422 11:06:37.012967   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:37.012822   27741 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0422 11:06:37.012990   27717 main.go:141] libmachine: (ha-821265) DBG | created network xml: 
	I0422 11:06:37.012999   27717 main.go:141] libmachine: (ha-821265) DBG | <network>
	I0422 11:06:37.013004   27717 main.go:141] libmachine: (ha-821265) DBG |   <name>mk-ha-821265</name>
	I0422 11:06:37.013010   27717 main.go:141] libmachine: (ha-821265) DBG |   <dns enable='no'/>
	I0422 11:06:37.013020   27717 main.go:141] libmachine: (ha-821265) DBG |   
	I0422 11:06:37.013029   27717 main.go:141] libmachine: (ha-821265) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0422 11:06:37.013034   27717 main.go:141] libmachine: (ha-821265) DBG |     <dhcp>
	I0422 11:06:37.013043   27717 main.go:141] libmachine: (ha-821265) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0422 11:06:37.013048   27717 main.go:141] libmachine: (ha-821265) DBG |     </dhcp>
	I0422 11:06:37.013054   27717 main.go:141] libmachine: (ha-821265) DBG |   </ip>
	I0422 11:06:37.013059   27717 main.go:141] libmachine: (ha-821265) DBG |   
	I0422 11:06:37.013064   27717 main.go:141] libmachine: (ha-821265) DBG | </network>
	I0422 11:06:37.013071   27717 main.go:141] libmachine: (ha-821265) DBG | 
	I0422 11:06:37.018249   27717 main.go:141] libmachine: (ha-821265) DBG | trying to create private KVM network mk-ha-821265 192.168.39.0/24...
	I0422 11:06:37.082455   27717 main.go:141] libmachine: (ha-821265) DBG | private KVM network mk-ha-821265 192.168.39.0/24 created
	I0422 11:06:37.082482   27717 main.go:141] libmachine: (ha-821265) Setting up store path in /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265 ...
	I0422 11:06:37.082496   27717 main.go:141] libmachine: (ha-821265) Building disk image from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0422 11:06:37.082576   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:37.082503   27741 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:06:37.082767   27717 main.go:141] libmachine: (ha-821265) Downloading /home/jenkins/minikube-integration/18711-7633/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0422 11:06:37.315869   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:37.315744   27741 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa...
	I0422 11:06:37.473307   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:37.473180   27741 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/ha-821265.rawdisk...
	I0422 11:06:37.473340   27717 main.go:141] libmachine: (ha-821265) DBG | Writing magic tar header
	I0422 11:06:37.473354   27717 main.go:141] libmachine: (ha-821265) DBG | Writing SSH key tar header
	I0422 11:06:37.473371   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:37.473325   27741 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265 ...
	I0422 11:06:37.473528   27717 main.go:141] libmachine: (ha-821265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265
	I0422 11:06:37.473562   27717 main.go:141] libmachine: (ha-821265) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265 (perms=drwx------)
	I0422 11:06:37.473570   27717 main.go:141] libmachine: (ha-821265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines
	I0422 11:06:37.473577   27717 main.go:141] libmachine: (ha-821265) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines (perms=drwxr-xr-x)
	I0422 11:06:37.473587   27717 main.go:141] libmachine: (ha-821265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:06:37.473605   27717 main.go:141] libmachine: (ha-821265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633
	I0422 11:06:37.473614   27717 main.go:141] libmachine: (ha-821265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 11:06:37.473626   27717 main.go:141] libmachine: (ha-821265) DBG | Checking permissions on dir: /home/jenkins
	I0422 11:06:37.473631   27717 main.go:141] libmachine: (ha-821265) DBG | Checking permissions on dir: /home
	I0422 11:06:37.473639   27717 main.go:141] libmachine: (ha-821265) DBG | Skipping /home - not owner
	I0422 11:06:37.473648   27717 main.go:141] libmachine: (ha-821265) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube (perms=drwxr-xr-x)
	I0422 11:06:37.473655   27717 main.go:141] libmachine: (ha-821265) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633 (perms=drwxrwxr-x)
	I0422 11:06:37.473675   27717 main.go:141] libmachine: (ha-821265) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 11:06:37.473685   27717 main.go:141] libmachine: (ha-821265) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 11:06:37.473703   27717 main.go:141] libmachine: (ha-821265) Creating domain...
	I0422 11:06:37.474882   27717 main.go:141] libmachine: (ha-821265) define libvirt domain using xml: 
	I0422 11:06:37.474908   27717 main.go:141] libmachine: (ha-821265) <domain type='kvm'>
	I0422 11:06:37.474915   27717 main.go:141] libmachine: (ha-821265)   <name>ha-821265</name>
	I0422 11:06:37.474925   27717 main.go:141] libmachine: (ha-821265)   <memory unit='MiB'>2200</memory>
	I0422 11:06:37.474932   27717 main.go:141] libmachine: (ha-821265)   <vcpu>2</vcpu>
	I0422 11:06:37.474936   27717 main.go:141] libmachine: (ha-821265)   <features>
	I0422 11:06:37.474941   27717 main.go:141] libmachine: (ha-821265)     <acpi/>
	I0422 11:06:37.474948   27717 main.go:141] libmachine: (ha-821265)     <apic/>
	I0422 11:06:37.474953   27717 main.go:141] libmachine: (ha-821265)     <pae/>
	I0422 11:06:37.474964   27717 main.go:141] libmachine: (ha-821265)     
	I0422 11:06:37.474968   27717 main.go:141] libmachine: (ha-821265)   </features>
	I0422 11:06:37.474973   27717 main.go:141] libmachine: (ha-821265)   <cpu mode='host-passthrough'>
	I0422 11:06:37.474979   27717 main.go:141] libmachine: (ha-821265)   
	I0422 11:06:37.474986   27717 main.go:141] libmachine: (ha-821265)   </cpu>
	I0422 11:06:37.475011   27717 main.go:141] libmachine: (ha-821265)   <os>
	I0422 11:06:37.475040   27717 main.go:141] libmachine: (ha-821265)     <type>hvm</type>
	I0422 11:06:37.475118   27717 main.go:141] libmachine: (ha-821265)     <boot dev='cdrom'/>
	I0422 11:06:37.475145   27717 main.go:141] libmachine: (ha-821265)     <boot dev='hd'/>
	I0422 11:06:37.475155   27717 main.go:141] libmachine: (ha-821265)     <bootmenu enable='no'/>
	I0422 11:06:37.475164   27717 main.go:141] libmachine: (ha-821265)   </os>
	I0422 11:06:37.475180   27717 main.go:141] libmachine: (ha-821265)   <devices>
	I0422 11:06:37.475195   27717 main.go:141] libmachine: (ha-821265)     <disk type='file' device='cdrom'>
	I0422 11:06:37.475208   27717 main.go:141] libmachine: (ha-821265)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/boot2docker.iso'/>
	I0422 11:06:37.475240   27717 main.go:141] libmachine: (ha-821265)       <target dev='hdc' bus='scsi'/>
	I0422 11:06:37.475245   27717 main.go:141] libmachine: (ha-821265)       <readonly/>
	I0422 11:06:37.475252   27717 main.go:141] libmachine: (ha-821265)     </disk>
	I0422 11:06:37.475259   27717 main.go:141] libmachine: (ha-821265)     <disk type='file' device='disk'>
	I0422 11:06:37.475270   27717 main.go:141] libmachine: (ha-821265)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 11:06:37.475292   27717 main.go:141] libmachine: (ha-821265)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/ha-821265.rawdisk'/>
	I0422 11:06:37.475311   27717 main.go:141] libmachine: (ha-821265)       <target dev='hda' bus='virtio'/>
	I0422 11:06:37.475324   27717 main.go:141] libmachine: (ha-821265)     </disk>
	I0422 11:06:37.475338   27717 main.go:141] libmachine: (ha-821265)     <interface type='network'>
	I0422 11:06:37.475353   27717 main.go:141] libmachine: (ha-821265)       <source network='mk-ha-821265'/>
	I0422 11:06:37.475366   27717 main.go:141] libmachine: (ha-821265)       <model type='virtio'/>
	I0422 11:06:37.475380   27717 main.go:141] libmachine: (ha-821265)     </interface>
	I0422 11:06:37.475397   27717 main.go:141] libmachine: (ha-821265)     <interface type='network'>
	I0422 11:06:37.475410   27717 main.go:141] libmachine: (ha-821265)       <source network='default'/>
	I0422 11:06:37.475421   27717 main.go:141] libmachine: (ha-821265)       <model type='virtio'/>
	I0422 11:06:37.475436   27717 main.go:141] libmachine: (ha-821265)     </interface>
	I0422 11:06:37.475449   27717 main.go:141] libmachine: (ha-821265)     <serial type='pty'>
	I0422 11:06:37.475473   27717 main.go:141] libmachine: (ha-821265)       <target port='0'/>
	I0422 11:06:37.475489   27717 main.go:141] libmachine: (ha-821265)     </serial>
	I0422 11:06:37.475508   27717 main.go:141] libmachine: (ha-821265)     <console type='pty'>
	I0422 11:06:37.475525   27717 main.go:141] libmachine: (ha-821265)       <target type='serial' port='0'/>
	I0422 11:06:37.475540   27717 main.go:141] libmachine: (ha-821265)     </console>
	I0422 11:06:37.475550   27717 main.go:141] libmachine: (ha-821265)     <rng model='virtio'>
	I0422 11:06:37.475562   27717 main.go:141] libmachine: (ha-821265)       <backend model='random'>/dev/random</backend>
	I0422 11:06:37.475568   27717 main.go:141] libmachine: (ha-821265)     </rng>
	I0422 11:06:37.475573   27717 main.go:141] libmachine: (ha-821265)     
	I0422 11:06:37.475579   27717 main.go:141] libmachine: (ha-821265)     
	I0422 11:06:37.475584   27717 main.go:141] libmachine: (ha-821265)   </devices>
	I0422 11:06:37.475590   27717 main.go:141] libmachine: (ha-821265) </domain>
	I0422 11:06:37.475604   27717 main.go:141] libmachine: (ha-821265) 
	I0422 11:06:37.479726   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:97:60:69 in network default
	I0422 11:06:37.480316   27717 main.go:141] libmachine: (ha-821265) Ensuring networks are active...
	I0422 11:06:37.480339   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:37.480961   27717 main.go:141] libmachine: (ha-821265) Ensuring network default is active
	I0422 11:06:37.481262   27717 main.go:141] libmachine: (ha-821265) Ensuring network mk-ha-821265 is active
	I0422 11:06:37.481907   27717 main.go:141] libmachine: (ha-821265) Getting domain xml...
	I0422 11:06:37.482822   27717 main.go:141] libmachine: (ha-821265) Creating domain...
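For context, the two XML documents logged above (the mk-ha-821265 network and the ha-821265 domain) are handed to libvirt to be defined and then started. A minimal sketch of the equivalent calls, assuming the libvirt.org/go/libvirt bindings rather than the kvm2 driver's actual code, with the domain XML abbreviated:

package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the cluster config above
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Private network with the DHCP range printed in the log.
	netXML := `<network>
  <name>mk-ha-821265</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp><range start='192.168.39.2' end='192.168.39.253'/></dhcp>
  </ip>
</network>`
	network, err := conn.NetworkDefineXML(netXML)
	if err != nil {
		log.Fatal(err)
	}
	defer network.Free()
	if err := network.Create(); err != nil { // bring the network up
		log.Fatal(err)
	}

	// The full <domain type='kvm'> document from the log would go here (elided).
	domXML := `<domain type='kvm'><name>ha-821265</name>...</domain>`
	dom, err := conn.DomainDefineXML(domXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // corresponds to "Creating domain..." above
		log.Fatal(err)
	}
}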
	I0422 11:06:38.657377   27717 main.go:141] libmachine: (ha-821265) Waiting to get IP...
	I0422 11:06:38.658275   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:38.658715   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:38.658737   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:38.658676   27741 retry.go:31] will retry after 211.485012ms: waiting for machine to come up
	I0422 11:06:38.872231   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:38.872917   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:38.872945   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:38.872884   27741 retry.go:31] will retry after 241.351108ms: waiting for machine to come up
	I0422 11:06:39.116484   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:39.116967   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:39.117000   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:39.116940   27741 retry.go:31] will retry after 389.175984ms: waiting for machine to come up
	I0422 11:06:39.507595   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:39.508169   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:39.508210   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:39.508121   27741 retry.go:31] will retry after 609.240168ms: waiting for machine to come up
	I0422 11:06:40.118900   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:40.119459   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:40.119484   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:40.119402   27741 retry.go:31] will retry after 555.876003ms: waiting for machine to come up
	I0422 11:06:40.677408   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:40.677839   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:40.677871   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:40.677811   27741 retry.go:31] will retry after 871.14358ms: waiting for machine to come up
	I0422 11:06:41.550850   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:41.551347   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:41.551387   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:41.551291   27741 retry.go:31] will retry after 844.675065ms: waiting for machine to come up
	I0422 11:06:42.398045   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:42.398907   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:42.398927   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:42.398861   27741 retry.go:31] will retry after 1.2788083s: waiting for machine to come up
	I0422 11:06:43.679116   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:43.679655   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:43.679678   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:43.679614   27741 retry.go:31] will retry after 1.645587291s: waiting for machine to come up
	I0422 11:06:45.327170   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:45.327642   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:45.327673   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:45.327582   27741 retry.go:31] will retry after 2.226967378s: waiting for machine to come up
	I0422 11:06:47.556383   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:47.556947   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:47.556988   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:47.556898   27741 retry.go:31] will retry after 2.091166086s: waiting for machine to come up
	I0422 11:06:49.651078   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:49.651488   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:49.651511   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:49.651450   27741 retry.go:31] will retry after 2.605110739s: waiting for machine to come up
	I0422 11:06:52.257652   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:52.258160   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:52.258190   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:52.258110   27741 retry.go:31] will retry after 4.516549684s: waiting for machine to come up
	I0422 11:06:56.779760   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:56.780137   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:56.780164   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:56.780091   27741 retry.go:31] will retry after 4.448627626s: waiting for machine to come up
	I0422 11:07:01.233713   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.234234   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has current primary IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.234260   27717 main.go:141] libmachine: (ha-821265) Found IP for machine: 192.168.39.150
	I0422 11:07:01.234273   27717 main.go:141] libmachine: (ha-821265) Reserving static IP address...
	I0422 11:07:01.234681   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find host DHCP lease matching {name: "ha-821265", mac: "52:54:00:17:f6:ad", ip: "192.168.39.150"} in network mk-ha-821265
	I0422 11:07:01.307403   27717 main.go:141] libmachine: (ha-821265) Reserved static IP address: 192.168.39.150
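The "will retry after ..." lines above come from a simple poll loop: the driver looks up the DHCP lease for the domain's MAC address and, while none exists, sleeps for a growing interval. A rough sketch of that pattern (lookupLease is a hypothetical stand-in for querying the libvirt DHCP leases, not minikube's retry package):

package main

import (
	"fmt"
	"time"
)

// waitForIP polls a lease-lookup function for the domain's MAC address,
// sleeping a growing interval between attempts, until an IP shows up or the
// deadline passes.
func waitForIP(lookupLease func(mac string) (string, bool), mac string, deadline time.Time) (string, error) {
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookupLease(mac); ok {
			return ip, nil
		}
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // back off, roughly like the increasing retries in the log
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s before deadline", mac)
}

func main() {
	// Stubbed lookup returning the address the log eventually found for 52:54:00:17:f6:ad.
	ip, err := waitForIP(func(string) (string, bool) { return "192.168.39.150", true },
		"52:54:00:17:f6:ad", time.Now().Add(time.Minute))
	fmt.Println(ip, err)
}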
	I0422 11:07:01.307430   27717 main.go:141] libmachine: (ha-821265) DBG | Getting to WaitForSSH function...
	I0422 11:07:01.307436   27717 main.go:141] libmachine: (ha-821265) Waiting for SSH to be available...
	I0422 11:07:01.309929   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.310292   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:minikube Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:01.310345   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.310399   27717 main.go:141] libmachine: (ha-821265) DBG | Using SSH client type: external
	I0422 11:07:01.310418   27717 main.go:141] libmachine: (ha-821265) DBG | Using SSH private key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa (-rw-------)
	I0422 11:07:01.310442   27717 main.go:141] libmachine: (ha-821265) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 11:07:01.310453   27717 main.go:141] libmachine: (ha-821265) DBG | About to run SSH command:
	I0422 11:07:01.310464   27717 main.go:141] libmachine: (ha-821265) DBG | exit 0
	I0422 11:07:01.433232   27717 main.go:141] libmachine: (ha-821265) DBG | SSH cmd err, output: <nil>: 
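WaitForSSH above shells out to the system ssh client and runs "exit 0" against the new address; an empty output with a nil error means the guest's sshd is reachable. A small sketch of the same probe using os/exec and a few of the options visible in the logged command line:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Options taken from the log line above; the identity file is the machine's generated id_rsa.
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", "/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa",
		"docker@192.168.39.150",
		"exit 0")
	out, err := cmd.CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}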
	I0422 11:07:01.433519   27717 main.go:141] libmachine: (ha-821265) KVM machine creation complete!
	I0422 11:07:01.433809   27717 main.go:141] libmachine: (ha-821265) Calling .GetConfigRaw
	I0422 11:07:01.434391   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:01.434626   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:01.434811   27717 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 11:07:01.434825   27717 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:07:01.436050   27717 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 11:07:01.436068   27717 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 11:07:01.436076   27717 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 11:07:01.436085   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:01.438380   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.438805   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:01.438848   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.438944   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:01.439121   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:01.439270   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:01.439408   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:01.439539   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:07:01.439790   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:07:01.439804   27717 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 11:07:01.544455   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 11:07:01.544479   27717 main.go:141] libmachine: Detecting the provisioner...
	I0422 11:07:01.544486   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:01.546915   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.547250   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:01.547278   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.547470   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:01.547664   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:01.547824   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:01.547962   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:01.548112   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:07:01.548272   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:07:01.548288   27717 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 11:07:01.650193   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 11:07:01.650278   27717 main.go:141] libmachine: found compatible host: buildroot
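Provisioner detection is just the "cat /etc/os-release" shown above plus a match on the ID field. A tiny sketch of that parse, reading a local os-release file instead of the remote one:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			id := strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			// The log's "found compatible host: buildroot" corresponds to this match.
			fmt.Println("detected provisioner:", id)
		}
	}
}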
	I0422 11:07:01.650290   27717 main.go:141] libmachine: Provisioning with buildroot...
	I0422 11:07:01.650297   27717 main.go:141] libmachine: (ha-821265) Calling .GetMachineName
	I0422 11:07:01.650556   27717 buildroot.go:166] provisioning hostname "ha-821265"
	I0422 11:07:01.650577   27717 main.go:141] libmachine: (ha-821265) Calling .GetMachineName
	I0422 11:07:01.650758   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:01.653134   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.653592   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:01.653639   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.653745   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:01.653930   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:01.654084   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:01.654218   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:01.654365   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:07:01.654559   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:07:01.654571   27717 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-821265 && echo "ha-821265" | sudo tee /etc/hostname
	I0422 11:07:01.772663   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-821265
	
	I0422 11:07:01.772688   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:01.775340   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.775659   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:01.775679   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.775818   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:01.776056   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:01.776210   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:01.776442   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:01.776604   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:07:01.776812   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:07:01.776835   27717 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-821265' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-821265/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-821265' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 11:07:01.892321   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 11:07:01.892350   27717 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18711-7633/.minikube CaCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18711-7633/.minikube}
	I0422 11:07:01.892389   27717 buildroot.go:174] setting up certificates
	I0422 11:07:01.892400   27717 provision.go:84] configureAuth start
	I0422 11:07:01.892411   27717 main.go:141] libmachine: (ha-821265) Calling .GetMachineName
	I0422 11:07:01.892751   27717 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:07:01.895459   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.895794   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:01.895817   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.895959   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:01.898184   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.898552   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:01.898586   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.898659   27717 provision.go:143] copyHostCerts
	I0422 11:07:01.898687   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:07:01.898718   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem, removing ...
	I0422 11:07:01.898726   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:07:01.898799   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem (1123 bytes)
	I0422 11:07:01.898897   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:07:01.898919   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem, removing ...
	I0422 11:07:01.898924   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:07:01.898951   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem (1679 bytes)
	I0422 11:07:01.899003   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:07:01.899019   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem, removing ...
	I0422 11:07:01.899023   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:07:01.899043   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem (1078 bytes)
	I0422 11:07:01.899099   27717 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem org=jenkins.ha-821265 san=[127.0.0.1 192.168.39.150 ha-821265 localhost minikube]
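The server certificate above is generated with the SAN list printed in the log (loopback, the VM's IP, the hostname, localhost, minikube). A minimal sketch of building such a certificate with Go's crypto/x509; minikube's helper signs it with the CA key pair from certs/ca.pem and ca-key.pem, which is simplified here to a self-signed certificate:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-821265"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above.
		DNSNames:    []string{"ha-821265", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.150")},
	}
	// Self-signed for the sketch; the real cert is signed by the minikube CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("generated server cert, %d DER bytes", len(der))
}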
	I0422 11:07:02.062780   27717 provision.go:177] copyRemoteCerts
	I0422 11:07:02.062837   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 11:07:02.062858   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:02.065745   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.065962   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.065993   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.066162   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:02.066359   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:02.066480   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:02.066589   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:07:02.151942   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 11:07:02.152000   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 11:07:02.183472   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 11:07:02.183535   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 11:07:02.211692   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 11:07:02.211752   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0422 11:07:02.239699   27717 provision.go:87] duration metric: took 347.283555ms to configureAuth
	I0422 11:07:02.239773   27717 buildroot.go:189] setting minikube options for container-runtime
	I0422 11:07:02.239979   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:07:02.240061   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:02.242574   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.243051   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.243079   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.243250   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:02.243385   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:02.243491   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:02.243634   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:02.243784   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:07:02.243942   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:07:02.243959   27717 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 11:07:02.531139   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 11:07:02.531181   27717 main.go:141] libmachine: Checking connection to Docker...
	I0422 11:07:02.531192   27717 main.go:141] libmachine: (ha-821265) Calling .GetURL
	I0422 11:07:02.532667   27717 main.go:141] libmachine: (ha-821265) DBG | Using libvirt version 6000000
	I0422 11:07:02.534749   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.535091   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.535122   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.535303   27717 main.go:141] libmachine: Docker is up and running!
	I0422 11:07:02.535317   27717 main.go:141] libmachine: Reticulating splines...
	I0422 11:07:02.535326   27717 client.go:171] duration metric: took 25.525404418s to LocalClient.Create
	I0422 11:07:02.535352   27717 start.go:167] duration metric: took 25.525468272s to libmachine.API.Create "ha-821265"
	I0422 11:07:02.535364   27717 start.go:293] postStartSetup for "ha-821265" (driver="kvm2")
	I0422 11:07:02.535378   27717 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 11:07:02.535399   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:02.535670   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 11:07:02.535716   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:02.538379   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.538870   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.538899   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.539053   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:02.539264   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:02.539395   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:02.539530   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:07:02.620393   27717 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 11:07:02.625633   27717 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 11:07:02.625662   27717 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/addons for local assets ...
	I0422 11:07:02.625722   27717 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/files for local assets ...
	I0422 11:07:02.625820   27717 filesync.go:149] local asset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> 149452.pem in /etc/ssl/certs
	I0422 11:07:02.625837   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /etc/ssl/certs/149452.pem
	I0422 11:07:02.625958   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 11:07:02.636656   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:07:02.664641   27717 start.go:296] duration metric: took 129.264119ms for postStartSetup
	I0422 11:07:02.664684   27717 main.go:141] libmachine: (ha-821265) Calling .GetConfigRaw
	I0422 11:07:02.665249   27717 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:07:02.668184   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.668719   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.668744   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.669026   27717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:07:02.669246   27717 start.go:128] duration metric: took 25.677224027s to createHost
	I0422 11:07:02.669273   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:02.671508   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.671839   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.671866   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.672015   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:02.672199   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:02.672380   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:02.672552   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:02.672795   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:07:02.673000   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:07:02.673016   27717 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 11:07:02.774061   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713784022.746888880
	
	I0422 11:07:02.774083   27717 fix.go:216] guest clock: 1713784022.746888880
	I0422 11:07:02.774089   27717 fix.go:229] Guest: 2024-04-22 11:07:02.74688888 +0000 UTC Remote: 2024-04-22 11:07:02.669261285 +0000 UTC m=+25.795587930 (delta=77.627595ms)
	I0422 11:07:02.774108   27717 fix.go:200] guest clock delta is within tolerance: 77.627595ms
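The guest-clock check above simply compares the VM's wall clock (date +%s.%N over SSH) against the host time recorded just before, and accepts the machine if the difference is inside a tolerance. A sketch of that comparison using the values from the log; the one-second tolerance is an assumption, only the 77.627595ms delta comes from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest clock and host ("Remote") timestamp as logged above.
	guest := time.Unix(1713784022, 746888880)
	host := time.Date(2024, time.April, 22, 11, 7, 2, 669261285, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	tolerance := time.Second // assumed threshold, not taken from the log
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}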
	I0422 11:07:02.774113   27717 start.go:83] releasing machines lock for "ha-821265", held for 25.78216251s
	I0422 11:07:02.774131   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:02.774387   27717 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:07:02.777343   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.777706   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.777743   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.777889   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:02.778565   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:02.778741   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:02.778837   27717 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 11:07:02.778891   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:02.779131   27717 ssh_runner.go:195] Run: cat /version.json
	I0422 11:07:02.779154   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:02.781537   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.781682   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.781775   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.781800   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.781936   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:02.782065   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.782090   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:02.782115   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.782222   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:02.782356   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:02.782358   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:02.782523   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:02.782541   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:07:02.782660   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:07:02.858772   27717 ssh_runner.go:195] Run: systemctl --version
	I0422 11:07:02.884766   27717 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 11:07:03.053932   27717 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 11:07:03.060762   27717 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 11:07:03.060845   27717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 11:07:03.079663   27717 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 11:07:03.079695   27717 start.go:494] detecting cgroup driver to use...
	I0422 11:07:03.079752   27717 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 11:07:03.099187   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 11:07:03.114267   27717 docker.go:217] disabling cri-docker service (if available) ...
	I0422 11:07:03.114320   27717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 11:07:03.128831   27717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 11:07:03.143117   27717 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 11:07:03.264431   27717 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 11:07:03.410999   27717 docker.go:233] disabling docker service ...
	I0422 11:07:03.411066   27717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 11:07:03.427738   27717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 11:07:03.442992   27717 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 11:07:03.590020   27717 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 11:07:03.724028   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 11:07:03.739776   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 11:07:03.760494   27717 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 11:07:03.760566   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:07:03.771686   27717 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 11:07:03.771757   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:07:03.782899   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:07:03.793763   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:07:03.804969   27717 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 11:07:03.816456   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:07:03.827577   27717 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:07:03.847124   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:07:03.858582   27717 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 11:07:03.868759   27717 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 11:07:03.868819   27717 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 11:07:03.884219   27717 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 11:07:03.895147   27717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:07:04.024227   27717 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 11:07:04.169763   27717 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 11:07:04.169845   27717 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 11:07:04.175638   27717 start.go:562] Will wait 60s for crictl version
	I0422 11:07:04.175690   27717 ssh_runner.go:195] Run: which crictl
	I0422 11:07:04.179988   27717 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 11:07:04.226582   27717 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 11:07:04.226660   27717 ssh_runner.go:195] Run: crio --version
	I0422 11:07:04.257365   27717 ssh_runner.go:195] Run: crio --version
	I0422 11:07:04.295410   27717 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 11:07:04.296625   27717 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:07:04.299600   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:04.299932   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:04.299962   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:04.300216   27717 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 11:07:04.304879   27717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 11:07:04.320479   27717 kubeadm.go:877] updating cluster {Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:
default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 11:07:04.320578   27717 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 11:07:04.320620   27717 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 11:07:04.356973   27717 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 11:07:04.357047   27717 ssh_runner.go:195] Run: which lz4
	I0422 11:07:04.361530   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0422 11:07:04.361631   27717 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0422 11:07:04.366278   27717 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 11:07:04.366307   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 11:07:06.042466   27717 crio.go:462] duration metric: took 1.680867865s to copy over tarball
	I0422 11:07:06.042549   27717 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 11:07:08.517152   27717 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.474581288s)
	I0422 11:07:08.517178   27717 crio.go:469] duration metric: took 2.474670403s to extract the tarball
	I0422 11:07:08.517185   27717 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 11:07:08.557848   27717 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 11:07:08.614549   27717 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 11:07:08.614573   27717 cache_images.go:84] Images are preloaded, skipping loading
	I0422 11:07:08.614580   27717 kubeadm.go:928] updating node { 192.168.39.150 8443 v1.30.0 crio true true} ...
	I0422 11:07:08.614696   27717 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-821265 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 11:07:08.614771   27717 ssh_runner.go:195] Run: crio config
	I0422 11:07:08.672421   27717 cni.go:84] Creating CNI manager for ""
	I0422 11:07:08.672449   27717 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0422 11:07:08.672466   27717 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 11:07:08.672491   27717 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-821265 NodeName:ha-821265 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 11:07:08.672663   27717 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-821265"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 11:07:08.672692   27717 kube-vip.go:111] generating kube-vip config ...
	I0422 11:07:08.672740   27717 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0422 11:07:08.691071   27717 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0422 11:07:08.691194   27717 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0422 11:07:08.691255   27717 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 11:07:08.703581   27717 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 11:07:08.703648   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0422 11:07:08.715654   27717 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0422 11:07:08.735131   27717 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 11:07:08.754255   27717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0422 11:07:08.773720   27717 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0422 11:07:08.792889   27717 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0422 11:07:08.797695   27717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 11:07:08.813352   27717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:07:08.956712   27717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 11:07:08.976494   27717 certs.go:68] Setting up /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265 for IP: 192.168.39.150
	I0422 11:07:08.976540   27717 certs.go:194] generating shared ca certs ...
	I0422 11:07:08.976559   27717 certs.go:226] acquiring lock for ca certs: {Name:mk0b77082b88c771d0b00be5267ca31dfee6f85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:07:08.976742   27717 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key
	I0422 11:07:08.976832   27717 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key
	I0422 11:07:08.976847   27717 certs.go:256] generating profile certs ...
	I0422 11:07:08.976914   27717 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key
	I0422 11:07:08.976930   27717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.crt with IP's: []
	I0422 11:07:09.418231   27717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.crt ...
	I0422 11:07:09.418257   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.crt: {Name:mk52952f8b4db593aadb2c250839f7b574f97019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:07:09.418416   27717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key ...
	I0422 11:07:09.418426   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key: {Name:mk8d80f7827aef3d1fd632a27cf705619b9e8dd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:07:09.418497   27717 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.9e670a0c
	I0422 11:07:09.418511   27717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.9e670a0c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150 192.168.39.254]
	I0422 11:07:09.559977   27717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.9e670a0c ...
	I0422 11:07:09.560006   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.9e670a0c: {Name:mk0789273f8824637744f6bccf5e25fe0c785651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:07:09.560146   27717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.9e670a0c ...
	I0422 11:07:09.560159   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.9e670a0c: {Name:mkd7c463326ca403ace533aedb196950306b2956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:07:09.560244   27717 certs.go:381] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.9e670a0c -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt
	I0422 11:07:09.560313   27717 certs.go:385] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.9e670a0c -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key
	I0422 11:07:09.560361   27717 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key
	I0422 11:07:09.560375   27717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt with IP's: []
	I0422 11:07:09.686192   27717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt ...
	I0422 11:07:09.686223   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt: {Name:mkdcbe0e829b44ac15262334df2d0ec129d534bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:07:09.686384   27717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key ...
	I0422 11:07:09.686394   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key: {Name:mk898c3151cb501a42e5a95c8238e1c668504887 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:07:09.686466   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 11:07:09.686483   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 11:07:09.686492   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 11:07:09.686510   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 11:07:09.686523   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 11:07:09.686541   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 11:07:09.686558   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 11:07:09.686570   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 11:07:09.686618   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem (1338 bytes)
	W0422 11:07:09.686664   27717 certs.go:480] ignoring /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945_empty.pem, impossibly tiny 0 bytes
	I0422 11:07:09.686673   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem (1679 bytes)
	I0422 11:07:09.686693   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem (1078 bytes)
	I0422 11:07:09.686717   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem (1123 bytes)
	I0422 11:07:09.686740   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem (1679 bytes)
	I0422 11:07:09.686778   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:07:09.686801   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem -> /usr/share/ca-certificates/14945.pem
	I0422 11:07:09.686814   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /usr/share/ca-certificates/149452.pem
	I0422 11:07:09.686826   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:07:09.687394   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 11:07:09.719007   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 11:07:09.752361   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 11:07:09.786230   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 11:07:09.826254   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0422 11:07:09.854135   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 11:07:09.880891   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 11:07:09.908450   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 11:07:09.937772   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem --> /usr/share/ca-certificates/14945.pem (1338 bytes)
	I0422 11:07:09.964957   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /usr/share/ca-certificates/149452.pem (1708 bytes)
	I0422 11:07:09.992047   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 11:07:10.018680   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 11:07:10.039838   27717 ssh_runner.go:195] Run: openssl version
	I0422 11:07:10.046726   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149452.pem && ln -fs /usr/share/ca-certificates/149452.pem /etc/ssl/certs/149452.pem"
	I0422 11:07:10.059914   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149452.pem
	I0422 11:07:10.065454   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 10:51 /usr/share/ca-certificates/149452.pem
	I0422 11:07:10.065516   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149452.pem
	I0422 11:07:10.072114   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149452.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 11:07:10.085601   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 11:07:10.098686   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:07:10.103949   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:07:10.104023   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:07:10.110511   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 11:07:10.123334   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14945.pem && ln -fs /usr/share/ca-certificates/14945.pem /etc/ssl/certs/14945.pem"
	I0422 11:07:10.136466   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14945.pem
	I0422 11:07:10.141665   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 10:51 /usr/share/ca-certificates/14945.pem
	I0422 11:07:10.141714   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14945.pem
	I0422 11:07:10.148031   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14945.pem /etc/ssl/certs/51391683.0"
	I0422 11:07:10.161327   27717 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 11:07:10.165970   27717 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 11:07:10.166014   27717 kubeadm.go:391] StartCluster: {Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:def
ault APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:07:10.166086   27717 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 11:07:10.166125   27717 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 11:07:10.211220   27717 cri.go:89] found id: ""
	I0422 11:07:10.211280   27717 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0422 11:07:10.223189   27717 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 11:07:10.234230   27717 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 11:07:10.245366   27717 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 11:07:10.245396   27717 kubeadm.go:156] found existing configuration files:
	
	I0422 11:07:10.245436   27717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 11:07:10.255832   27717 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 11:07:10.255887   27717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 11:07:10.267058   27717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 11:07:10.278221   27717 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 11:07:10.278286   27717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 11:07:10.289797   27717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 11:07:10.300487   27717 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 11:07:10.300547   27717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 11:07:10.311149   27717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 11:07:10.321852   27717 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 11:07:10.321927   27717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 11:07:10.333728   27717 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 11:07:10.440238   27717 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 11:07:10.440307   27717 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 11:07:10.608397   27717 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 11:07:10.608523   27717 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 11:07:10.608647   27717 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 11:07:10.850748   27717 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 11:07:10.960280   27717 out.go:204]   - Generating certificates and keys ...
	I0422 11:07:10.960408   27717 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 11:07:10.960497   27717 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 11:07:11.181371   27717 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0422 11:07:11.287702   27717 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0422 11:07:11.629487   27717 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0422 11:07:11.731677   27717 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0422 11:07:11.859817   27717 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0422 11:07:11.860017   27717 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-821265 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I0422 11:07:11.948558   27717 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0422 11:07:12.006501   27717 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-821265 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I0422 11:07:12.447883   27717 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0422 11:07:12.714302   27717 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0422 11:07:12.795236   27717 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0422 11:07:12.795355   27717 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 11:07:12.956592   27717 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 11:07:13.238680   27717 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 11:07:13.406825   27717 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 11:07:13.748333   27717 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 11:07:14.012055   27717 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 11:07:14.012755   27717 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 11:07:14.016020   27717 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 11:07:14.019738   27717 out.go:204]   - Booting up control plane ...
	I0422 11:07:14.019859   27717 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 11:07:14.019984   27717 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 11:07:14.020069   27717 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 11:07:14.039659   27717 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 11:07:14.042042   27717 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 11:07:14.042100   27717 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 11:07:14.176609   27717 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 11:07:14.176741   27717 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 11:07:15.178523   27717 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002710448s
	I0422 11:07:15.178595   27717 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 11:07:20.847642   27717 kubeadm.go:309] [api-check] The API server is healthy after 5.67237434s
	I0422 11:07:20.860726   27717 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 11:07:20.884143   27717 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 11:07:20.912922   27717 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 11:07:20.913118   27717 kubeadm.go:309] [mark-control-plane] Marking the node ha-821265 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 11:07:20.928649   27717 kubeadm.go:309] [bootstrap-token] Using token: yuo67z.grhhzrpl1n2nxox8
	I0422 11:07:20.930298   27717 out.go:204]   - Configuring RBAC rules ...
	I0422 11:07:20.930431   27717 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 11:07:20.937411   27717 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 11:07:20.948557   27717 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 11:07:20.952520   27717 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 11:07:20.956537   27717 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 11:07:20.959717   27717 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 11:07:21.255254   27717 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 11:07:21.708154   27717 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 11:07:22.262044   27717 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 11:07:22.262081   27717 kubeadm.go:309] 
	I0422 11:07:22.262177   27717 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 11:07:22.262190   27717 kubeadm.go:309] 
	I0422 11:07:22.262284   27717 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 11:07:22.262297   27717 kubeadm.go:309] 
	I0422 11:07:22.262352   27717 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 11:07:22.262427   27717 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 11:07:22.262507   27717 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 11:07:22.262520   27717 kubeadm.go:309] 
	I0422 11:07:22.262601   27717 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 11:07:22.262616   27717 kubeadm.go:309] 
	I0422 11:07:22.262689   27717 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 11:07:22.262707   27717 kubeadm.go:309] 
	I0422 11:07:22.262785   27717 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 11:07:22.262890   27717 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 11:07:22.262998   27717 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 11:07:22.263012   27717 kubeadm.go:309] 
	I0422 11:07:22.263130   27717 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 11:07:22.263234   27717 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 11:07:22.263247   27717 kubeadm.go:309] 
	I0422 11:07:22.263369   27717 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yuo67z.grhhzrpl1n2nxox8 \
	I0422 11:07:22.263515   27717 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f \
	I0422 11:07:22.263553   27717 kubeadm.go:309] 	--control-plane 
	I0422 11:07:22.263562   27717 kubeadm.go:309] 
	I0422 11:07:22.263661   27717 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 11:07:22.263672   27717 kubeadm.go:309] 
	I0422 11:07:22.263808   27717 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yuo67z.grhhzrpl1n2nxox8 \
	I0422 11:07:22.263949   27717 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f 
	I0422 11:07:22.264112   27717 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 11:07:22.264148   27717 cni.go:84] Creating CNI manager for ""
	I0422 11:07:22.264162   27717 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0422 11:07:22.266062   27717 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0422 11:07:22.267446   27717 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0422 11:07:22.273513   27717 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0422 11:07:22.273527   27717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0422 11:07:22.292914   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0422 11:07:22.660410   27717 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 11:07:22.660535   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:22.660539   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-821265 minikube.k8s.io/updated_at=2024_04_22T11_07_22_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437 minikube.k8s.io/name=ha-821265 minikube.k8s.io/primary=true
	I0422 11:07:22.693421   27717 ops.go:34] apiserver oom_adj: -16
	I0422 11:07:22.849804   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:23.350189   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:23.850615   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:24.350927   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:24.849910   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:25.350015   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:25.850250   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:26.350024   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:26.850681   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:27.349901   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:27.850740   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:28.350694   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:28.850222   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:29.349986   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:29.850702   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:30.350742   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:30.850442   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:31.349847   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:31.850029   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:32.349941   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:32.850871   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:33.350491   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:33.849979   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:34.349942   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:34.850271   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:35.350692   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:35.850705   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:36.350530   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:36.850731   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:37.350006   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:37.492355   27717 kubeadm.go:1107] duration metric: took 14.831883199s to wait for elevateKubeSystemPrivileges
	W0422 11:07:37.492403   27717 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 11:07:37.492412   27717 kubeadm.go:393] duration metric: took 27.326400295s to StartCluster
	I0422 11:07:37.492431   27717 settings.go:142] acquiring lock: {Name:mkd680667f0df4166491741d55b55ac111bb0138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:07:37.492511   27717 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 11:07:37.493319   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/kubeconfig: {Name:mkee6de4c6906fe5621e8aeac858a93219648db5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:07:37.493562   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0422 11:07:37.493580   27717 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 11:07:37.493634   27717 addons.go:69] Setting storage-provisioner=true in profile "ha-821265"
	I0422 11:07:37.493659   27717 addons.go:69] Setting default-storageclass=true in profile "ha-821265"
	I0422 11:07:37.493705   27717 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-821265"
	I0422 11:07:37.493739   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:07:37.493562   27717 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 11:07:37.493785   27717 start.go:240] waiting for startup goroutines ...
	I0422 11:07:37.493664   27717 addons.go:234] Setting addon storage-provisioner=true in "ha-821265"
	I0422 11:07:37.493831   27717 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:07:37.494037   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:07:37.494059   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:07:37.494223   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:07:37.494257   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:07:37.508555   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
	I0422 11:07:37.508611   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37983
	I0422 11:07:37.509008   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:07:37.509046   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:07:37.509515   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:07:37.509535   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:07:37.509545   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:07:37.509551   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:07:37.509906   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:07:37.509946   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:07:37.510119   27717 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:07:37.510502   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:07:37.510536   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:07:37.512267   27717 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 11:07:37.512584   27717 kapi.go:59] client config for ha-821265: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.crt", KeyFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key", CAFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0422 11:07:37.513095   27717 cert_rotation.go:137] Starting client certificate rotation controller
	I0422 11:07:37.513356   27717 addons.go:234] Setting addon default-storageclass=true in "ha-821265"
	I0422 11:07:37.513400   27717 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:07:37.513797   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:07:37.513844   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:07:37.526083   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43051
	I0422 11:07:37.526636   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:07:37.527148   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:07:37.527166   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:07:37.527494   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:07:37.527677   27717 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:07:37.527950   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0422 11:07:37.528423   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:07:37.528961   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:07:37.528992   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:07:37.529325   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:07:37.529334   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:37.531582   27717 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 11:07:37.529873   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:07:37.533014   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:07:37.533096   27717 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 11:07:37.533112   27717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 11:07:37.533130   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:37.536149   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:37.536532   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:37.536564   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:37.536756   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:37.536999   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:37.537161   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:37.537326   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:07:37.547901   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34973
	I0422 11:07:37.548257   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:07:37.548732   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:07:37.548757   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:07:37.549107   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:07:37.549292   27717 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:07:37.550876   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:37.551112   27717 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 11:07:37.551126   27717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 11:07:37.551142   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:37.553701   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:37.554028   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:37.554054   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:37.554172   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:37.554367   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:37.554512   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:37.554659   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:07:37.666344   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0422 11:07:37.683340   27717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 11:07:37.776823   27717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 11:07:38.264702   27717 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0422 11:07:38.584466   27717 main.go:141] libmachine: Making call to close driver server
	I0422 11:07:38.584490   27717 main.go:141] libmachine: (ha-821265) Calling .Close
	I0422 11:07:38.584489   27717 main.go:141] libmachine: Making call to close driver server
	I0422 11:07:38.584499   27717 main.go:141] libmachine: (ha-821265) Calling .Close
	I0422 11:07:38.584843   27717 main.go:141] libmachine: Successfully made call to close driver server
	I0422 11:07:38.584862   27717 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 11:07:38.584871   27717 main.go:141] libmachine: Making call to close driver server
	I0422 11:07:38.584878   27717 main.go:141] libmachine: (ha-821265) Calling .Close
	I0422 11:07:38.584892   27717 main.go:141] libmachine: (ha-821265) DBG | Closing plugin on server side
	I0422 11:07:38.584921   27717 main.go:141] libmachine: Successfully made call to close driver server
	I0422 11:07:38.584939   27717 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 11:07:38.584949   27717 main.go:141] libmachine: Making call to close driver server
	I0422 11:07:38.584960   27717 main.go:141] libmachine: (ha-821265) Calling .Close
	I0422 11:07:38.585165   27717 main.go:141] libmachine: Successfully made call to close driver server
	I0422 11:07:38.585186   27717 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 11:07:38.585245   27717 main.go:141] libmachine: (ha-821265) DBG | Closing plugin on server side
	I0422 11:07:38.585275   27717 main.go:141] libmachine: Successfully made call to close driver server
	I0422 11:07:38.585288   27717 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 11:07:38.585409   27717 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0422 11:07:38.585422   27717 round_trippers.go:469] Request Headers:
	I0422 11:07:38.585439   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:07:38.585446   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:07:38.599077   27717 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0422 11:07:38.599625   27717 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0422 11:07:38.599640   27717 round_trippers.go:469] Request Headers:
	I0422 11:07:38.599647   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:07:38.599651   27717 round_trippers.go:473]     Content-Type: application/json
	I0422 11:07:38.599653   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:07:38.602433   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:07:38.602601   27717 main.go:141] libmachine: Making call to close driver server
	I0422 11:07:38.602622   27717 main.go:141] libmachine: (ha-821265) Calling .Close
	I0422 11:07:38.602886   27717 main.go:141] libmachine: Successfully made call to close driver server
	I0422 11:07:38.602904   27717 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 11:07:38.602909   27717 main.go:141] libmachine: (ha-821265) DBG | Closing plugin on server side
	I0422 11:07:38.605613   27717 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0422 11:07:38.607072   27717 addons.go:505] duration metric: took 1.113487551s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0422 11:07:38.607108   27717 start.go:245] waiting for cluster config update ...
	I0422 11:07:38.607123   27717 start.go:254] writing updated cluster config ...
	I0422 11:07:38.608878   27717 out.go:177] 
	I0422 11:07:38.610515   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:07:38.610586   27717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:07:38.612341   27717 out.go:177] * Starting "ha-821265-m02" control-plane node in "ha-821265" cluster
	I0422 11:07:38.613595   27717 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 11:07:38.613614   27717 cache.go:56] Caching tarball of preloaded images
	I0422 11:07:38.613693   27717 preload.go:173] Found /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 11:07:38.613733   27717 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 11:07:38.613804   27717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:07:38.613988   27717 start.go:360] acquireMachinesLock for ha-821265-m02: {Name:mk5cb9b294e703b264c1f97ac968ffd01e93b576 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 11:07:38.614031   27717 start.go:364] duration metric: took 23.705µs to acquireMachinesLock for "ha-821265-m02"
	I0422 11:07:38.614047   27717 start.go:93] Provisioning new machine with config: &{Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 11:07:38.614111   27717 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0422 11:07:38.615767   27717 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 11:07:38.615865   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:07:38.615894   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:07:38.630236   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43337
	I0422 11:07:38.630684   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:07:38.631201   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:07:38.631224   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:07:38.631528   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:07:38.631771   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetMachineName
	I0422 11:07:38.631910   27717 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:07:38.632051   27717 start.go:159] libmachine.API.Create for "ha-821265" (driver="kvm2")
	I0422 11:07:38.632075   27717 client.go:168] LocalClient.Create starting
	I0422 11:07:38.632097   27717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem
	I0422 11:07:38.632124   27717 main.go:141] libmachine: Decoding PEM data...
	I0422 11:07:38.632140   27717 main.go:141] libmachine: Parsing certificate...
	I0422 11:07:38.632188   27717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem
	I0422 11:07:38.632205   27717 main.go:141] libmachine: Decoding PEM data...
	I0422 11:07:38.632215   27717 main.go:141] libmachine: Parsing certificate...
	I0422 11:07:38.632228   27717 main.go:141] libmachine: Running pre-create checks...
	I0422 11:07:38.632236   27717 main.go:141] libmachine: (ha-821265-m02) Calling .PreCreateCheck
	I0422 11:07:38.632440   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetConfigRaw
	I0422 11:07:38.632832   27717 main.go:141] libmachine: Creating machine...
	I0422 11:07:38.632847   27717 main.go:141] libmachine: (ha-821265-m02) Calling .Create
	I0422 11:07:38.632966   27717 main.go:141] libmachine: (ha-821265-m02) Creating KVM machine...
	I0422 11:07:38.634262   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found existing default KVM network
	I0422 11:07:38.634429   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found existing private KVM network mk-ha-821265
	I0422 11:07:38.634613   27717 main.go:141] libmachine: (ha-821265-m02) Setting up store path in /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02 ...
	I0422 11:07:38.634637   27717 main.go:141] libmachine: (ha-821265-m02) Building disk image from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0422 11:07:38.634697   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:38.634608   28122 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:07:38.634839   27717 main.go:141] libmachine: (ha-821265-m02) Downloading /home/jenkins/minikube-integration/18711-7633/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0422 11:07:38.858903   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:38.858752   28122 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa...
	I0422 11:07:39.068919   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:39.068788   28122 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/ha-821265-m02.rawdisk...
	I0422 11:07:39.068952   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Writing magic tar header
	I0422 11:07:39.068966   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Writing SSH key tar header
	I0422 11:07:39.068978   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:39.068894   28122 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02 ...
	I0422 11:07:39.068993   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02
	I0422 11:07:39.069051   27717 main.go:141] libmachine: (ha-821265-m02) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02 (perms=drwx------)
	I0422 11:07:39.069084   27717 main.go:141] libmachine: (ha-821265-m02) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines (perms=drwxr-xr-x)
	I0422 11:07:39.069100   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines
	I0422 11:07:39.069113   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:07:39.069121   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633
	I0422 11:07:39.069130   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 11:07:39.069137   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Checking permissions on dir: /home/jenkins
	I0422 11:07:39.069150   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Checking permissions on dir: /home
	I0422 11:07:39.069168   27717 main.go:141] libmachine: (ha-821265-m02) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube (perms=drwxr-xr-x)
	I0422 11:07:39.069179   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Skipping /home - not owner
	I0422 11:07:39.069196   27717 main.go:141] libmachine: (ha-821265-m02) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633 (perms=drwxrwxr-x)
	I0422 11:07:39.069208   27717 main.go:141] libmachine: (ha-821265-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 11:07:39.069219   27717 main.go:141] libmachine: (ha-821265-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 11:07:39.069228   27717 main.go:141] libmachine: (ha-821265-m02) Creating domain...
	I0422 11:07:39.070037   27717 main.go:141] libmachine: (ha-821265-m02) define libvirt domain using xml: 
	I0422 11:07:39.070052   27717 main.go:141] libmachine: (ha-821265-m02) <domain type='kvm'>
	I0422 11:07:39.070061   27717 main.go:141] libmachine: (ha-821265-m02)   <name>ha-821265-m02</name>
	I0422 11:07:39.070067   27717 main.go:141] libmachine: (ha-821265-m02)   <memory unit='MiB'>2200</memory>
	I0422 11:07:39.070075   27717 main.go:141] libmachine: (ha-821265-m02)   <vcpu>2</vcpu>
	I0422 11:07:39.070085   27717 main.go:141] libmachine: (ha-821265-m02)   <features>
	I0422 11:07:39.070099   27717 main.go:141] libmachine: (ha-821265-m02)     <acpi/>
	I0422 11:07:39.070109   27717 main.go:141] libmachine: (ha-821265-m02)     <apic/>
	I0422 11:07:39.070121   27717 main.go:141] libmachine: (ha-821265-m02)     <pae/>
	I0422 11:07:39.070137   27717 main.go:141] libmachine: (ha-821265-m02)     
	I0422 11:07:39.070149   27717 main.go:141] libmachine: (ha-821265-m02)   </features>
	I0422 11:07:39.070165   27717 main.go:141] libmachine: (ha-821265-m02)   <cpu mode='host-passthrough'>
	I0422 11:07:39.070176   27717 main.go:141] libmachine: (ha-821265-m02)   
	I0422 11:07:39.070184   27717 main.go:141] libmachine: (ha-821265-m02)   </cpu>
	I0422 11:07:39.070194   27717 main.go:141] libmachine: (ha-821265-m02)   <os>
	I0422 11:07:39.070205   27717 main.go:141] libmachine: (ha-821265-m02)     <type>hvm</type>
	I0422 11:07:39.070217   27717 main.go:141] libmachine: (ha-821265-m02)     <boot dev='cdrom'/>
	I0422 11:07:39.070229   27717 main.go:141] libmachine: (ha-821265-m02)     <boot dev='hd'/>
	I0422 11:07:39.070241   27717 main.go:141] libmachine: (ha-821265-m02)     <bootmenu enable='no'/>
	I0422 11:07:39.070253   27717 main.go:141] libmachine: (ha-821265-m02)   </os>
	I0422 11:07:39.070263   27717 main.go:141] libmachine: (ha-821265-m02)   <devices>
	I0422 11:07:39.070274   27717 main.go:141] libmachine: (ha-821265-m02)     <disk type='file' device='cdrom'>
	I0422 11:07:39.070289   27717 main.go:141] libmachine: (ha-821265-m02)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/boot2docker.iso'/>
	I0422 11:07:39.070301   27717 main.go:141] libmachine: (ha-821265-m02)       <target dev='hdc' bus='scsi'/>
	I0422 11:07:39.070314   27717 main.go:141] libmachine: (ha-821265-m02)       <readonly/>
	I0422 11:07:39.070324   27717 main.go:141] libmachine: (ha-821265-m02)     </disk>
	I0422 11:07:39.070348   27717 main.go:141] libmachine: (ha-821265-m02)     <disk type='file' device='disk'>
	I0422 11:07:39.070373   27717 main.go:141] libmachine: (ha-821265-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 11:07:39.070391   27717 main.go:141] libmachine: (ha-821265-m02)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/ha-821265-m02.rawdisk'/>
	I0422 11:07:39.070410   27717 main.go:141] libmachine: (ha-821265-m02)       <target dev='hda' bus='virtio'/>
	I0422 11:07:39.070422   27717 main.go:141] libmachine: (ha-821265-m02)     </disk>
	I0422 11:07:39.070437   27717 main.go:141] libmachine: (ha-821265-m02)     <interface type='network'>
	I0422 11:07:39.070447   27717 main.go:141] libmachine: (ha-821265-m02)       <source network='mk-ha-821265'/>
	I0422 11:07:39.070458   27717 main.go:141] libmachine: (ha-821265-m02)       <model type='virtio'/>
	I0422 11:07:39.070470   27717 main.go:141] libmachine: (ha-821265-m02)     </interface>
	I0422 11:07:39.070481   27717 main.go:141] libmachine: (ha-821265-m02)     <interface type='network'>
	I0422 11:07:39.070492   27717 main.go:141] libmachine: (ha-821265-m02)       <source network='default'/>
	I0422 11:07:39.070502   27717 main.go:141] libmachine: (ha-821265-m02)       <model type='virtio'/>
	I0422 11:07:39.070513   27717 main.go:141] libmachine: (ha-821265-m02)     </interface>
	I0422 11:07:39.070527   27717 main.go:141] libmachine: (ha-821265-m02)     <serial type='pty'>
	I0422 11:07:39.070535   27717 main.go:141] libmachine: (ha-821265-m02)       <target port='0'/>
	I0422 11:07:39.070545   27717 main.go:141] libmachine: (ha-821265-m02)     </serial>
	I0422 11:07:39.070557   27717 main.go:141] libmachine: (ha-821265-m02)     <console type='pty'>
	I0422 11:07:39.070569   27717 main.go:141] libmachine: (ha-821265-m02)       <target type='serial' port='0'/>
	I0422 11:07:39.070581   27717 main.go:141] libmachine: (ha-821265-m02)     </console>
	I0422 11:07:39.070594   27717 main.go:141] libmachine: (ha-821265-m02)     <rng model='virtio'>
	I0422 11:07:39.070608   27717 main.go:141] libmachine: (ha-821265-m02)       <backend model='random'>/dev/random</backend>
	I0422 11:07:39.070616   27717 main.go:141] libmachine: (ha-821265-m02)     </rng>
	I0422 11:07:39.070624   27717 main.go:141] libmachine: (ha-821265-m02)     
	I0422 11:07:39.070635   27717 main.go:141] libmachine: (ha-821265-m02)     
	I0422 11:07:39.070648   27717 main.go:141] libmachine: (ha-821265-m02)   </devices>
	I0422 11:07:39.070658   27717 main.go:141] libmachine: (ha-821265-m02) </domain>
	I0422 11:07:39.070697   27717 main.go:141] libmachine: (ha-821265-m02) 
	I0422 11:07:39.076687   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:7d:91:8e in network default
	I0422 11:07:39.077253   27717 main.go:141] libmachine: (ha-821265-m02) Ensuring networks are active...
	I0422 11:07:39.077271   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:39.078064   27717 main.go:141] libmachine: (ha-821265-m02) Ensuring network default is active
	I0422 11:07:39.078404   27717 main.go:141] libmachine: (ha-821265-m02) Ensuring network mk-ha-821265 is active
	I0422 11:07:39.078879   27717 main.go:141] libmachine: (ha-821265-m02) Getting domain xml...
	I0422 11:07:39.079496   27717 main.go:141] libmachine: (ha-821265-m02) Creating domain...
	I0422 11:07:40.281067   27717 main.go:141] libmachine: (ha-821265-m02) Waiting to get IP...
	I0422 11:07:40.281872   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:40.282331   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:40.282378   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:40.282308   28122 retry.go:31] will retry after 209.923235ms: waiting for machine to come up
	I0422 11:07:40.493858   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:40.494350   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:40.494385   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:40.494301   28122 retry.go:31] will retry after 252.288683ms: waiting for machine to come up
	I0422 11:07:40.747583   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:40.748156   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:40.748182   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:40.748095   28122 retry.go:31] will retry after 406.145373ms: waiting for machine to come up
	I0422 11:07:41.155279   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:41.155756   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:41.155778   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:41.155721   28122 retry.go:31] will retry after 394.52636ms: waiting for machine to come up
	I0422 11:07:41.552175   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:41.552562   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:41.552592   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:41.552542   28122 retry.go:31] will retry after 573.105029ms: waiting for machine to come up
	I0422 11:07:42.126984   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:42.127466   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:42.127497   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:42.127417   28122 retry.go:31] will retry after 582.958863ms: waiting for machine to come up
	I0422 11:07:42.712332   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:42.712816   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:42.712846   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:42.712764   28122 retry.go:31] will retry after 730.242889ms: waiting for machine to come up
	I0422 11:07:43.444527   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:43.445079   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:43.445111   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:43.445027   28122 retry.go:31] will retry after 1.362127335s: waiting for machine to come up
	I0422 11:07:44.809161   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:44.809551   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:44.809581   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:44.809497   28122 retry.go:31] will retry after 1.496080323s: waiting for machine to come up
	I0422 11:07:46.308152   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:46.308736   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:46.308792   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:46.308665   28122 retry.go:31] will retry after 1.432513378s: waiting for machine to come up
	I0422 11:07:47.743407   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:47.743849   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:47.743880   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:47.743807   28122 retry.go:31] will retry after 2.384548765s: waiting for machine to come up
	I0422 11:07:50.130638   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:50.131138   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:50.131173   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:50.131098   28122 retry.go:31] will retry after 2.477699962s: waiting for machine to come up
	I0422 11:07:52.611732   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:52.612157   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:52.612172   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:52.612123   28122 retry.go:31] will retry after 3.533482498s: waiting for machine to come up
	I0422 11:07:56.147614   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:56.148219   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:56.148245   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:56.148156   28122 retry.go:31] will retry after 3.799865165s: waiting for machine to come up
	I0422 11:07:59.949768   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:59.950217   27717 main.go:141] libmachine: (ha-821265-m02) Found IP for machine: 192.168.39.39
	I0422 11:07:59.950249   27717 main.go:141] libmachine: (ha-821265-m02) Reserving static IP address...
	I0422 11:07:59.950261   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has current primary IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:59.950604   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find host DHCP lease matching {name: "ha-821265-m02", mac: "52:54:00:3b:2d:41", ip: "192.168.39.39"} in network mk-ha-821265
	I0422 11:08:00.024915   27717 main.go:141] libmachine: (ha-821265-m02) Reserved static IP address: 192.168.39.39
	I0422 11:08:00.024946   27717 main.go:141] libmachine: (ha-821265-m02) Waiting for SSH to be available...
	I0422 11:08:00.024957   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Getting to WaitForSSH function...
	I0422 11:08:00.027330   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.027693   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.027719   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.027917   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Using SSH client type: external
	I0422 11:08:00.027947   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa (-rw-------)
	I0422 11:08:00.027995   27717 main.go:141] libmachine: (ha-821265-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 11:08:00.028009   27717 main.go:141] libmachine: (ha-821265-m02) DBG | About to run SSH command:
	I0422 11:08:00.028032   27717 main.go:141] libmachine: (ha-821265-m02) DBG | exit 0
	I0422 11:08:00.148973   27717 main.go:141] libmachine: (ha-821265-m02) DBG | SSH cmd err, output: <nil>: 
	I0422 11:08:00.149269   27717 main.go:141] libmachine: (ha-821265-m02) KVM machine creation complete!
	I0422 11:08:00.149663   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetConfigRaw
	I0422 11:08:00.150197   27717 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:08:00.150434   27717 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:08:00.150596   27717 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 11:08:00.150616   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetState
	I0422 11:08:00.151900   27717 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 11:08:00.151912   27717 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 11:08:00.151920   27717 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 11:08:00.151927   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:00.154396   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.154898   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.154928   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.155188   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:00.155369   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.155530   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.155650   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:00.155845   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:08:00.156048   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0422 11:08:00.156061   27717 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 11:08:00.256261   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 11:08:00.256288   27717 main.go:141] libmachine: Detecting the provisioner...
	I0422 11:08:00.256298   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:00.259064   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.259471   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.259499   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.259666   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:00.259884   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.260049   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.260211   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:00.260385   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:08:00.260534   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0422 11:08:00.260545   27717 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 11:08:00.362338   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 11:08:00.362410   27717 main.go:141] libmachine: found compatible host: buildroot
	I0422 11:08:00.362420   27717 main.go:141] libmachine: Provisioning with buildroot...
	I0422 11:08:00.362429   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetMachineName
	I0422 11:08:00.362633   27717 buildroot.go:166] provisioning hostname "ha-821265-m02"
	I0422 11:08:00.362652   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetMachineName
	I0422 11:08:00.362824   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:00.365061   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.365427   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.365459   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.365605   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:00.365773   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.365932   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.366062   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:00.366217   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:08:00.366418   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0422 11:08:00.366435   27717 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-821265-m02 && echo "ha-821265-m02" | sudo tee /etc/hostname
	I0422 11:08:00.483472   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-821265-m02
	
	I0422 11:08:00.483501   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:00.486241   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.486647   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.486672   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.486906   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:00.487097   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.487295   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.487455   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:00.487634   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:08:00.487793   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0422 11:08:00.487809   27717 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-821265-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-821265-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-821265-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 11:08:00.599788   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 11:08:00.599822   27717 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18711-7633/.minikube CaCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18711-7633/.minikube}
	I0422 11:08:00.599847   27717 buildroot.go:174] setting up certificates
	I0422 11:08:00.599856   27717 provision.go:84] configureAuth start
	I0422 11:08:00.599866   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetMachineName
	I0422 11:08:00.600165   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetIP
	I0422 11:08:00.602844   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.603226   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.603252   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.603396   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:00.605548   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.605811   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.605835   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.605963   27717 provision.go:143] copyHostCerts
	I0422 11:08:00.605994   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:08:00.606026   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem, removing ...
	I0422 11:08:00.606035   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:08:00.606094   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem (1078 bytes)
	I0422 11:08:00.606159   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:08:00.606175   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem, removing ...
	I0422 11:08:00.606182   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:08:00.606204   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem (1123 bytes)
	I0422 11:08:00.606245   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:08:00.606279   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem, removing ...
	I0422 11:08:00.606283   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:08:00.606303   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem (1679 bytes)
	I0422 11:08:00.606348   27717 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem org=jenkins.ha-821265-m02 san=[127.0.0.1 192.168.39.39 ha-821265-m02 localhost minikube]
	I0422 11:08:00.820089   27717 provision.go:177] copyRemoteCerts
	I0422 11:08:00.820141   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 11:08:00.820163   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:00.823004   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.823324   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.823355   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.823557   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:00.823782   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.823963   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:00.824108   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa Username:docker}
	I0422 11:08:00.905817   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 11:08:00.905890   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 11:08:00.934564   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 11:08:00.934660   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0422 11:08:00.963574   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 11:08:00.963651   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 11:08:00.992392   27717 provision.go:87] duration metric: took 392.523314ms to configureAuth
	I0422 11:08:00.992423   27717 buildroot.go:189] setting minikube options for container-runtime
	I0422 11:08:00.992633   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:08:00.992738   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:00.995432   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.995786   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.995818   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.995901   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:00.996092   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.996245   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.996424   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:00.996569   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:08:00.996757   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0422 11:08:00.996783   27717 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 11:08:01.292968   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 11:08:01.292997   27717 main.go:141] libmachine: Checking connection to Docker...
	I0422 11:08:01.293008   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetURL
	I0422 11:08:01.294387   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Using libvirt version 6000000
	I0422 11:08:01.296316   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.296702   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:01.296733   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.296920   27717 main.go:141] libmachine: Docker is up and running!
	I0422 11:08:01.296937   27717 main.go:141] libmachine: Reticulating splines...
	I0422 11:08:01.296943   27717 client.go:171] duration metric: took 22.664863117s to LocalClient.Create
	I0422 11:08:01.296965   27717 start.go:167] duration metric: took 22.664913115s to libmachine.API.Create "ha-821265"
	I0422 11:08:01.296973   27717 start.go:293] postStartSetup for "ha-821265-m02" (driver="kvm2")
	I0422 11:08:01.296985   27717 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 11:08:01.297007   27717 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:08:01.297253   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 11:08:01.297286   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:01.299470   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.299782   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:01.299808   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.299960   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:01.300123   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:01.300252   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:01.300390   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa Username:docker}
	I0422 11:08:01.382130   27717 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 11:08:01.387634   27717 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 11:08:01.387664   27717 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/addons for local assets ...
	I0422 11:08:01.387739   27717 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/files for local assets ...
	I0422 11:08:01.387826   27717 filesync.go:149] local asset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> 149452.pem in /etc/ssl/certs
	I0422 11:08:01.387843   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /etc/ssl/certs/149452.pem
	I0422 11:08:01.387947   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 11:08:01.399676   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:08:01.428041   27717 start.go:296] duration metric: took 131.053549ms for postStartSetup
	I0422 11:08:01.428101   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetConfigRaw
	I0422 11:08:01.428748   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetIP
	I0422 11:08:01.431381   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.431796   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:01.431827   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.432048   27717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:08:01.432302   27717 start.go:128] duration metric: took 22.818178479s to createHost
	I0422 11:08:01.432328   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:01.434738   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.435058   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:01.435081   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.435262   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:01.435468   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:01.435627   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:01.435761   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:01.435920   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:08:01.436075   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0422 11:08:01.436086   27717 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 11:08:01.534505   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713784081.500955140
	
	I0422 11:08:01.534526   27717 fix.go:216] guest clock: 1713784081.500955140
	I0422 11:08:01.534533   27717 fix.go:229] Guest: 2024-04-22 11:08:01.50095514 +0000 UTC Remote: 2024-04-22 11:08:01.432317327 +0000 UTC m=+84.558643972 (delta=68.637813ms)
	I0422 11:08:01.534547   27717 fix.go:200] guest clock delta is within tolerance: 68.637813ms
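
The fix.go lines above run `date +%s.%N` on the guest and compare the result with the host clock; the 68.6ms delta is accepted. A small Go sketch of that comparison, with an assumed 2-second tolerance and helper names of our own:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClock parses the output of `date +%s.%N` into a time.Time without
// losing nanosecond precision to float64 rounding.
func guestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/trim fractional part to 9 digits
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// values from the log: guest `date` output vs. the recorded host time
	guest, _ := guestClock("1713784081.500955140")
	host := time.Date(2024, 4, 22, 11, 8, 1, 432317327, time.UTC)
	delta := guest.Sub(host)
	fmt.Printf("delta=%v within 2s tolerance: %v\n", delta, delta < 2*time.Second && delta > -2*time.Second)
}
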
	I0422 11:08:01.534552   27717 start.go:83] releasing machines lock for "ha-821265-m02", held for 22.920513101s
	I0422 11:08:01.534568   27717 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:08:01.534852   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetIP
	I0422 11:08:01.537488   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.537820   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:01.537854   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.540532   27717 out.go:177] * Found network options:
	I0422 11:08:01.542100   27717 out.go:177]   - NO_PROXY=192.168.39.150
	W0422 11:08:01.543470   27717 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 11:08:01.543499   27717 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:08:01.544123   27717 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:08:01.544335   27717 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:08:01.544433   27717 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 11:08:01.544476   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	W0422 11:08:01.544571   27717 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 11:08:01.544644   27717 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 11:08:01.544668   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:01.547105   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.547287   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.547479   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:01.547521   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.547620   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:01.547752   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:01.547778   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.547806   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:01.547913   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:01.548035   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:01.548103   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:01.548174   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa Username:docker}
	I0422 11:08:01.548247   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:01.548374   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa Username:docker}
	I0422 11:08:01.801265   27717 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 11:08:01.808839   27717 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 11:08:01.808903   27717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 11:08:01.830039   27717 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
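
Before choosing a CNI, minikube shelves any pre-existing bridge/podman CNI configs so they cannot shadow the cluster's own network plugin; that is what the find/mv pipeline above does. A Go sketch of the same idea (hypothetical helper, not minikube's cni.go):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames any bridge/podman CNI configs in dir with a
// .mk_disabled suffix, mirroring the find/mv pipeline above (needs root).
func disableBridgeCNI(dir string) ([]string, error) {
	matches, err := filepath.Glob(filepath.Join(dir, "*"))
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, f := range matches {
		base := filepath.Base(f)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already shelved
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(f, f+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, f)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("disabled:", disabled)
}
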
	I0422 11:08:01.830062   27717 start.go:494] detecting cgroup driver to use...
	I0422 11:08:01.830131   27717 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 11:08:01.847745   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 11:08:01.864112   27717 docker.go:217] disabling cri-docker service (if available) ...
	I0422 11:08:01.864177   27717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 11:08:01.881388   27717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 11:08:01.896992   27717 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 11:08:02.017988   27717 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 11:08:02.153186   27717 docker.go:233] disabling docker service ...
	I0422 11:08:02.153262   27717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 11:08:02.170314   27717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 11:08:02.185420   27717 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 11:08:02.334674   27717 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 11:08:02.463413   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 11:08:02.481347   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 11:08:02.505117   27717 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 11:08:02.505179   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:08:02.519887   27717 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 11:08:02.519944   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:08:02.537079   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:08:02.550183   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:08:02.562990   27717 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 11:08:02.576044   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:08:02.589791   27717 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:08:02.610609   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
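
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands, not captured from the VM):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
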
	I0422 11:08:02.623991   27717 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 11:08:02.635903   27717 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 11:08:02.635973   27717 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 11:08:02.656318   27717 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 11:08:02.669014   27717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:08:02.797820   27717 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 11:08:02.956094   27717 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 11:08:02.956168   27717 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 11:08:02.961822   27717 start.go:562] Will wait 60s for crictl version
	I0422 11:08:02.961880   27717 ssh_runner.go:195] Run: which crictl
	I0422 11:08:02.966471   27717 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 11:08:03.010403   27717 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 11:08:03.010494   27717 ssh_runner.go:195] Run: crio --version
	I0422 11:08:03.041054   27717 ssh_runner.go:195] Run: crio --version
	I0422 11:08:03.074458   27717 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 11:08:03.076219   27717 out.go:177]   - env NO_PROXY=192.168.39.150
	I0422 11:08:03.077542   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetIP
	I0422 11:08:03.079900   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:03.080227   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:03.080266   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:03.080466   27717 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 11:08:03.085519   27717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 11:08:03.100828   27717 mustload.go:65] Loading cluster: ha-821265
	I0422 11:08:03.101095   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:08:03.101347   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:08:03.101375   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:08:03.115985   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39533
	I0422 11:08:03.116441   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:08:03.116913   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:08:03.116956   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:08:03.117294   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:08:03.117525   27717 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:08:03.119157   27717 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:08:03.119429   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:08:03.119452   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:08:03.133496   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32799
	I0422 11:08:03.133891   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:08:03.134278   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:08:03.134297   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:08:03.134660   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:08:03.134853   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:08:03.135044   27717 certs.go:68] Setting up /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265 for IP: 192.168.39.39
	I0422 11:08:03.135057   27717 certs.go:194] generating shared ca certs ...
	I0422 11:08:03.135073   27717 certs.go:226] acquiring lock for ca certs: {Name:mk0b77082b88c771d0b00be5267ca31dfee6f85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:08:03.135180   27717 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key
	I0422 11:08:03.135214   27717 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key
	I0422 11:08:03.135223   27717 certs.go:256] generating profile certs ...
	I0422 11:08:03.135284   27717 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key
	I0422 11:08:03.135305   27717 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.a296c170
	I0422 11:08:03.135316   27717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.a296c170 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150 192.168.39.39 192.168.39.254]
	I0422 11:08:03.278006   27717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.a296c170 ...
	I0422 11:08:03.278033   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.a296c170: {Name:mk6c5e1350c2c2683938acc8747d6aca8f9b695f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:08:03.278219   27717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.a296c170 ...
	I0422 11:08:03.278237   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.a296c170: {Name:mkb01e1ae1e9af5af1e53d30f02544be7ca37e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:08:03.278324   27717 certs.go:381] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.a296c170 -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt
	I0422 11:08:03.278479   27717 certs.go:385] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.a296c170 -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key
	I0422 11:08:03.278636   27717 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key
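
The apiserver certificate generated above is signed by the shared minikube CA and carries IP SANs for the service VIP (10.96.0.1), localhost, both node IPs and the HA VIP 192.168.39.254, so every control-plane endpoint presents a valid certificate. A Go sketch of issuing such a cert (assumed helper; minikube's crypto.go differs in detail):

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// SignAPIServerCert issues an apiserver serving certificate signed by the
// cluster CA and carrying the given IP SANs.
func SignAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"system:masters"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1, 192.168.39.150, .39, .254
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}
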
	I0422 11:08:03.278655   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 11:08:03.278672   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 11:08:03.278693   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 11:08:03.278711   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 11:08:03.278727   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 11:08:03.278745   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 11:08:03.278763   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 11:08:03.278780   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 11:08:03.278834   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem (1338 bytes)
	W0422 11:08:03.278872   27717 certs.go:480] ignoring /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945_empty.pem, impossibly tiny 0 bytes
	I0422 11:08:03.278885   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem (1679 bytes)
	I0422 11:08:03.278914   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem (1078 bytes)
	I0422 11:08:03.278950   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem (1123 bytes)
	I0422 11:08:03.278980   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem (1679 bytes)
	I0422 11:08:03.279038   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:08:03.279072   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:08:03.279091   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem -> /usr/share/ca-certificates/14945.pem
	I0422 11:08:03.279110   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /usr/share/ca-certificates/149452.pem
	I0422 11:08:03.279149   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:08:03.282344   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:08:03.282743   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:08:03.282766   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:08:03.283013   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:08:03.283213   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:08:03.283375   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:08:03.283539   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:08:03.357320   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0422 11:08:03.363336   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0422 11:08:03.376122   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0422 11:08:03.381121   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0422 11:08:03.392577   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0422 11:08:03.397766   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0422 11:08:03.409097   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0422 11:08:03.414145   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0422 11:08:03.426034   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0422 11:08:03.432534   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0422 11:08:03.445588   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0422 11:08:03.451438   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0422 11:08:03.463931   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 11:08:03.492712   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 11:08:03.522027   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 11:08:03.550733   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 11:08:03.578488   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0422 11:08:03.606234   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 11:08:03.633142   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 11:08:03.661421   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 11:08:03.689994   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 11:08:03.719276   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem --> /usr/share/ca-certificates/14945.pem (1338 bytes)
	I0422 11:08:03.749292   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /usr/share/ca-certificates/149452.pem (1708 bytes)
	I0422 11:08:03.778394   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0422 11:08:03.797583   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0422 11:08:03.817573   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0422 11:08:03.838503   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0422 11:08:03.857851   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0422 11:08:03.878554   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0422 11:08:03.898946   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0422 11:08:03.918974   27717 ssh_runner.go:195] Run: openssl version
	I0422 11:08:03.925494   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149452.pem && ln -fs /usr/share/ca-certificates/149452.pem /etc/ssl/certs/149452.pem"
	I0422 11:08:03.938792   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149452.pem
	I0422 11:08:03.944047   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 10:51 /usr/share/ca-certificates/149452.pem
	I0422 11:08:03.944113   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149452.pem
	I0422 11:08:03.950776   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149452.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 11:08:03.963579   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 11:08:03.976266   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:08:03.981506   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:08:03.981564   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:08:03.988812   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 11:08:04.001732   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14945.pem && ln -fs /usr/share/ca-certificates/14945.pem /etc/ssl/certs/14945.pem"
	I0422 11:08:04.016188   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14945.pem
	I0422 11:08:04.021824   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 10:51 /usr/share/ca-certificates/14945.pem
	I0422 11:08:04.021884   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14945.pem
	I0422 11:08:04.028182   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14945.pem /etc/ssl/certs/51391683.0"
	I0422 11:08:04.041323   27717 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 11:08:04.046091   27717 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 11:08:04.046146   27717 kubeadm.go:928] updating node {m02 192.168.39.39 8443 v1.30.0 crio true true} ...
	I0422 11:08:04.046227   27717 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-821265-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 11:08:04.046268   27717 kube-vip.go:111] generating kube-vip config ...
	I0422 11:08:04.046302   27717 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0422 11:08:04.066913   27717 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0422 11:08:04.066976   27717 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
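
kube-vip.go renders the manifest above from a handful of per-cluster values: the HA VIP 192.168.39.254, port 8443, interface eth0, plus the auto-enabled control-plane load balancing. A toy text/template covering just the varying env entries (our own, not the upstream template):

package main

import (
	"os"
	"text/template"
)

// vipParams holds the per-cluster values that vary in the kube-vip manifest.
type vipParams struct {
	VIP       string // control-plane virtual IP
	Port      string // apiserver port
	Interface string // interface to announce the VIP on
	LBEnable  bool   // control-plane load balancing (auto-enabled above)
}

var envTmpl = template.Must(template.New("kube-vip-env").Parse(`    - name: address
      value: "{{.VIP}}"
    - name: port
      value: "{{.Port}}"
    - name: vip_interface
      value: {{.Interface}}
    - name: lb_enable
      value: "{{.LBEnable}}"
`))

func main() {
	_ = envTmpl.Execute(os.Stdout, vipParams{
		VIP: "192.168.39.254", Port: "8443", Interface: "eth0", LBEnable: true,
	})
}
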
	I0422 11:08:04.067031   27717 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 11:08:04.079006   27717 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0422 11:08:04.079071   27717 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0422 11:08:04.090862   27717 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0422 11:08:04.090893   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0422 11:08:04.090933   27717 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0422 11:08:04.090963   27717 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0422 11:08:04.090969   27717 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0422 11:08:04.095975   27717 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0422 11:08:04.096002   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0422 11:08:05.465221   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0422 11:08:05.465294   27717 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0422 11:08:05.471030   27717 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0422 11:08:05.471073   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0422 11:08:05.502872   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:08:05.524045   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0422 11:08:05.524147   27717 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0422 11:08:05.543706   27717 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0422 11:08:05.543749   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
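
The kubectl/kubeadm/kubelet downloads above carry a `checksum=file:...sha256` query, meaning each binary is checked against its published .sha256 file before being cached and scp'd to the node. A Go sketch of that download-and-verify step (hypothetical helper):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into memory, failing on any non-200 status.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

// downloadVerified fetches a release binary and compares its SHA-256 with the
// published .sha256 file before writing it out.
func downloadVerified(binURL, shaURL, dest string) error {
	sum, err := fetch(shaURL)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sum)) // file holds "<hex>" or "<hex>  <name>"
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file %s", shaURL)
	}
	want := fields[0]
	body, err := fetch(binURL)
	if err != nil {
		return err
	}
	got := sha256.Sum256(body)
	if hex.EncodeToString(got[:]) != want {
		return fmt.Errorf("checksum mismatch for %s", binURL)
	}
	return os.WriteFile(dest, body, 0o755)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/"
	if err := downloadVerified(base+"kubectl", base+"kubectl.sha256", "/tmp/kubectl"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
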
	I0422 11:08:06.200090   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0422 11:08:06.211906   27717 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0422 11:08:06.232327   27717 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 11:08:06.252239   27717 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0422 11:08:06.271763   27717 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0422 11:08:06.276972   27717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 11:08:06.293357   27717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:08:06.434830   27717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 11:08:06.454435   27717 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:08:06.454789   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:08:06.454834   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:08:06.470668   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45259
	I0422 11:08:06.471092   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:08:06.471574   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:08:06.471598   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:08:06.471948   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:08:06.472182   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:08:06.472345   27717 start.go:316] joinCluster: &{Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:08:06.472450   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0422 11:08:06.472466   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:08:06.475406   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:08:06.475811   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:08:06.475845   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:08:06.475963   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:08:06.476141   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:08:06.476304   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:08:06.476443   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:08:06.635304   27717 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 11:08:06.635354   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z87d26.j7b7qlu8fy64qymo --discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-821265-m02 --control-plane --apiserver-advertise-address=192.168.39.39 --apiserver-bind-port=8443"
	I0422 11:08:32.117047   27717 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z87d26.j7b7qlu8fy64qymo --discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-821265-m02 --control-plane --apiserver-advertise-address=192.168.39.39 --apiserver-bind-port=8443": (25.481666773s)
	I0422 11:08:32.117085   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0422 11:08:32.697064   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-821265-m02 minikube.k8s.io/updated_at=2024_04_22T11_08_32_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437 minikube.k8s.io/name=ha-821265 minikube.k8s.io/primary=false
	I0422 11:08:32.865477   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-821265-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0422 11:08:33.004395   27717 start.go:318] duration metric: took 26.532045458s to joinCluster
	I0422 11:08:33.004479   27717 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 11:08:33.006372   27717 out.go:177] * Verifying Kubernetes components...
	I0422 11:08:33.004820   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:08:33.007959   27717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:08:33.213054   27717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 11:08:33.238317   27717 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 11:08:33.238542   27717 kapi.go:59] client config for ha-821265: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.crt", KeyFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key", CAFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0422 11:08:33.238605   27717 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.150:8443
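
The node_ready wait that follows polls GET /api/v1/nodes/ha-821265-m02 every 500ms with the profile's client certificate until the node reports Ready. A stripped-down Go version of that loop (assumed helpers, plain net/http instead of minikube's round_trippers):

package nodewait

import (
	"crypto/tls"
	"crypto/x509"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

// newClient builds an HTTP client that authenticates with the profile's
// client certificate, the same material referenced by kapi.go above.
func newClient(certFile, keyFile, caFile string) (*http.Client, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	return &http.Client{
		Timeout: 10 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
		},
	}, nil
}

// waitNodeReady polls GET /api/v1/nodes/<name> every 500ms until the Ready
// condition turns True or the timeout expires.
func waitNodeReady(client *http.Client, apiServer, node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ready, err := nodeReady(client, apiServer, node); err == nil && ready {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready after %v", node, timeout)
}

// nodeReady fetches the node object and inspects its Ready condition.
func nodeReady(client *http.Client, apiServer, node string) (bool, error) {
	resp, err := client.Get(apiServer + "/api/v1/nodes/" + node)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}
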
	I0422 11:08:33.238788   27717 node_ready.go:35] waiting up to 6m0s for node "ha-821265-m02" to be "Ready" ...
	I0422 11:08:33.238893   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:33.238905   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:33.238915   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:33.238925   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:33.249378   27717 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0422 11:08:33.739962   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:33.739990   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:33.740003   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:33.740013   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:33.747289   27717 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0422 11:08:34.239456   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:34.239482   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:34.239494   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:34.239500   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:34.244012   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:34.739569   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:34.739587   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:34.739594   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:34.739599   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:34.743005   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:35.239308   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:35.239338   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:35.239348   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:35.239360   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:35.242933   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:35.243480   27717 node_ready.go:53] node "ha-821265-m02" has status "Ready":"False"
	I0422 11:08:35.739880   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:35.739905   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:35.739916   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:35.739923   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:35.744182   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:36.239509   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:36.239532   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:36.239540   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:36.239543   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:36.243219   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:36.739644   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:36.739669   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:36.739677   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:36.739679   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:36.745017   27717 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 11:08:37.239990   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:37.240010   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:37.240019   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:37.240022   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:37.243831   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:37.244762   27717 node_ready.go:53] node "ha-821265-m02" has status "Ready":"False"
	I0422 11:08:37.739455   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:37.739483   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:37.739493   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:37.739500   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:37.743417   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:38.239764   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:38.239788   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:38.239796   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:38.239801   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:38.243425   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:38.739666   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:38.739689   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:38.739697   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:38.739704   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:38.743497   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:39.239694   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:39.239726   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:39.239734   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:39.239737   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:39.311498   27717 round_trippers.go:574] Response Status: 200 OK in 71 milliseconds
	I0422 11:08:39.312213   27717 node_ready.go:53] node "ha-821265-m02" has status "Ready":"False"
	I0422 11:08:39.739534   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:39.739561   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:39.739572   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:39.739577   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:39.743236   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:40.239693   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:40.239722   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:40.239731   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:40.239737   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:40.243245   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:40.739236   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:40.739255   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:40.739262   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:40.739267   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:40.743188   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:41.239087   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:41.239109   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.239116   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.239120   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.243241   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:41.243875   27717 node_ready.go:49] node "ha-821265-m02" has status "Ready":"True"
	I0422 11:08:41.243892   27717 node_ready.go:38] duration metric: took 8.00507777s for node "ha-821265-m02" to be "Ready" ...
	I0422 11:08:41.243900   27717 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 11:08:41.243996   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:08:41.244012   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.244023   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.244031   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.250578   27717 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0422 11:08:41.257431   27717 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ft2jl" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:41.257503   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ft2jl
	I0422 11:08:41.257508   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.257516   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.257519   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.261931   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:41.263190   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:41.263205   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.263214   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.263221   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.266990   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:41.267525   27717 pod_ready.go:92] pod "coredns-7db6d8ff4d-ft2jl" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:41.267539   27717 pod_ready.go:81] duration metric: took 10.084348ms for pod "coredns-7db6d8ff4d-ft2jl" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:41.267548   27717 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ht7jl" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:41.267594   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ht7jl
	I0422 11:08:41.267601   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.267608   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.267612   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.283136   27717 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0422 11:08:41.283905   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:41.283919   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.283929   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.283937   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.287754   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:41.288387   27717 pod_ready.go:92] pod "coredns-7db6d8ff4d-ht7jl" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:41.288406   27717 pod_ready.go:81] duration metric: took 20.852945ms for pod "coredns-7db6d8ff4d-ht7jl" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:41.288415   27717 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:41.288465   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265
	I0422 11:08:41.288472   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.288479   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.288484   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.291524   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:41.292279   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:41.292292   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.292303   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.292309   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.295532   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:41.296238   27717 pod_ready.go:92] pod "etcd-ha-821265" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:41.296258   27717 pod_ready.go:81] duration metric: took 7.834312ms for pod "etcd-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:41.296266   27717 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:41.296325   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:41.296335   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.296343   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.296348   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.299964   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:41.301465   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:41.301479   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.301488   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.301493   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.304174   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:41.797164   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:41.797186   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.797194   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.797214   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.801288   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:41.802038   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:41.802054   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.802061   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.802065   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.804980   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:42.296631   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:42.296655   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:42.296663   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:42.296667   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:42.300192   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:42.300858   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:42.300871   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:42.300877   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:42.300881   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:42.303625   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:42.797421   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:42.797440   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:42.797449   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:42.797452   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:42.806273   27717 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0422 11:08:42.807009   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:42.807027   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:42.807038   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:42.807045   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:42.810179   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:43.297029   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:43.297055   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:43.297067   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:43.297073   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:43.300723   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:43.301660   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:43.301674   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:43.301680   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:43.301683   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:43.304321   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:43.304880   27717 pod_ready.go:102] pod "etcd-ha-821265-m02" in "kube-system" namespace has status "Ready":"False"
	I0422 11:08:43.796822   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:43.796843   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:43.796851   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:43.796855   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:43.800291   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:43.800956   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:43.800973   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:43.800983   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:43.800988   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:43.803395   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:44.297325   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:44.297352   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:44.297363   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:44.297369   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:44.301457   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:44.302129   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:44.302144   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:44.302152   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:44.302158   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:44.305097   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:44.797092   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:44.797114   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:44.797121   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:44.797125   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:44.800678   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:44.801608   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:44.801623   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:44.801629   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:44.801632   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:44.804656   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:45.296805   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:45.296830   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:45.296839   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:45.296844   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:45.301027   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:45.301971   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:45.301988   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:45.301995   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:45.301998   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:45.305320   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:45.305908   27717 pod_ready.go:102] pod "etcd-ha-821265-m02" in "kube-system" namespace has status "Ready":"False"
	I0422 11:08:45.797336   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:45.797361   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:45.797372   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:45.797379   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:45.801701   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:45.802294   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:45.802311   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:45.802317   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:45.802323   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:45.805667   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:45.806539   27717 pod_ready.go:92] pod "etcd-ha-821265-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:45.806561   27717 pod_ready.go:81] duration metric: took 4.510288487s for pod "etcd-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:45.806580   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:45.806649   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265
	I0422 11:08:45.806660   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:45.806671   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:45.806681   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:45.810462   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:45.811135   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:45.811149   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:45.811156   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:45.811160   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:45.814298   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:45.818487   27717 pod_ready.go:92] pod "kube-apiserver-ha-821265" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:45.818505   27717 pod_ready.go:81] duration metric: took 11.913247ms for pod "kube-apiserver-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:45.818514   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:45.818578   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:08:45.818588   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:45.818596   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:45.818600   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:45.822562   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:45.823295   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:45.823307   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:45.823314   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:45.823318   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:45.828332   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:46.318942   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:08:46.318962   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:46.318970   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:46.318977   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:46.322350   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:46.323120   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:46.323134   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:46.323142   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:46.323146   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:46.325549   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:46.819125   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:08:46.819144   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:46.819152   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:46.819155   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:46.823840   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:46.824766   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:46.824801   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:46.824813   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:46.824818   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:46.828927   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:47.318760   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:08:47.318788   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:47.318799   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:47.318805   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:47.322364   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:47.323127   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:47.323145   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:47.323152   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:47.323158   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:47.326356   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:47.819584   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:08:47.819607   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:47.819615   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:47.819619   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:47.823835   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:47.824816   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:47.824833   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:47.824841   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:47.824845   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:47.827487   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:47.828054   27717 pod_ready.go:102] pod "kube-apiserver-ha-821265-m02" in "kube-system" namespace has status "Ready":"False"
	I0422 11:08:48.319273   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:08:48.319298   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:48.319317   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:48.319325   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:48.323004   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:48.324098   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:48.324115   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:48.324125   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:48.324130   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:48.327545   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:48.818809   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:08:48.818832   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:48.818839   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:48.818842   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:48.822692   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:48.823429   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:48.823452   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:48.823461   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:48.823466   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:48.826653   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:49.319641   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:08:49.319671   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.319682   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.319686   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.323515   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:49.324375   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:49.324394   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.324405   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.324410   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.327996   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:49.328647   27717 pod_ready.go:92] pod "kube-apiserver-ha-821265-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:49.328667   27717 pod_ready.go:81] duration metric: took 3.510146972s for pod "kube-apiserver-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:49.328677   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:49.328737   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265
	I0422 11:08:49.328741   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.328748   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.328752   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.331952   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:49.332891   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:49.332908   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.332916   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.332920   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.335564   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:49.336180   27717 pod_ready.go:92] pod "kube-controller-manager-ha-821265" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:49.336208   27717 pod_ready.go:81] duration metric: took 7.523243ms for pod "kube-controller-manager-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:49.336222   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:49.336291   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m02
	I0422 11:08:49.336304   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.336313   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.336318   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.339488   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:49.340156   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:49.340172   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.340179   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.340183   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.343204   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:49.343915   27717 pod_ready.go:92] pod "kube-controller-manager-ha-821265-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:49.343936   27717 pod_ready.go:81] duration metric: took 7.706743ms for pod "kube-controller-manager-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:49.343946   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j2hpk" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:49.439213   27717 request.go:629] Waited for 95.204097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j2hpk
	I0422 11:08:49.439299   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j2hpk
	I0422 11:08:49.439312   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.439322   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.439332   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.443409   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:49.639154   27717 request.go:629] Waited for 194.343471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:49.639214   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:49.639220   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.639228   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.639231   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.643437   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:49.644072   27717 pod_ready.go:92] pod "kube-proxy-j2hpk" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:49.644094   27717 pod_ready.go:81] duration metric: took 300.14016ms for pod "kube-proxy-j2hpk" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:49.644108   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w7r9d" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:49.839545   27717 request.go:629] Waited for 195.375525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w7r9d
	I0422 11:08:49.839617   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w7r9d
	I0422 11:08:49.839623   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.839630   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.839634   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.843443   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:50.039830   27717 request.go:629] Waited for 195.191671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:50.039924   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:50.039934   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:50.039946   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:50.039958   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:50.043198   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:50.044101   27717 pod_ready.go:92] pod "kube-proxy-w7r9d" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:50.044119   27717 pod_ready.go:81] duration metric: took 400.00501ms for pod "kube-proxy-w7r9d" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:50.044128   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:50.239411   27717 request.go:629] Waited for 195.20436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265
	I0422 11:08:50.239481   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265
	I0422 11:08:50.239492   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:50.239501   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:50.239510   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:50.243228   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:50.439633   27717 request.go:629] Waited for 195.390191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:50.439708   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:50.439717   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:50.439725   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:50.439734   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:50.444259   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:50.444881   27717 pod_ready.go:92] pod "kube-scheduler-ha-821265" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:50.444900   27717 pod_ready.go:81] duration metric: took 400.765645ms for pod "kube-scheduler-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:50.444909   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:50.639902   27717 request.go:629] Waited for 194.938684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265-m02
	I0422 11:08:50.639970   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265-m02
	I0422 11:08:50.639976   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:50.639987   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:50.639998   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:50.643883   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:50.840108   27717 request.go:629] Waited for 195.435349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:50.840212   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:50.840231   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:50.840242   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:50.840250   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:50.843620   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:50.845161   27717 pod_ready.go:92] pod "kube-scheduler-ha-821265-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:50.845179   27717 pod_ready.go:81] duration metric: took 400.263918ms for pod "kube-scheduler-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:50.845190   27717 pod_ready.go:38] duration metric: took 9.601243901s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 11:08:50.845203   27717 api_server.go:52] waiting for apiserver process to appear ...
	I0422 11:08:50.845258   27717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:08:50.862092   27717 api_server.go:72] duration metric: took 17.857570443s to wait for apiserver process to appear ...
	I0422 11:08:50.862115   27717 api_server.go:88] waiting for apiserver healthz status ...
	I0422 11:08:50.862131   27717 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0422 11:08:50.868932   27717 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I0422 11:08:50.869006   27717 round_trippers.go:463] GET https://192.168.39.150:8443/version
	I0422 11:08:50.869018   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:50.869028   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:50.869035   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:50.869991   27717 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0422 11:08:50.870124   27717 api_server.go:141] control plane version: v1.30.0
	I0422 11:08:50.870142   27717 api_server.go:131] duration metric: took 8.020804ms to wait for apiserver health ...
	I0422 11:08:50.870151   27717 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 11:08:51.039531   27717 request.go:629] Waited for 169.318698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:08:51.039579   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:08:51.039586   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:51.039593   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:51.039598   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:51.046315   27717 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0422 11:08:51.052196   27717 system_pods.go:59] 17 kube-system pods found
	I0422 11:08:51.052232   27717 system_pods.go:61] "coredns-7db6d8ff4d-ft2jl" [09e14815-b8e9-4b60-9b2c-c7d86cccb594] Running
	I0422 11:08:51.052237   27717 system_pods.go:61] "coredns-7db6d8ff4d-ht7jl" [c404a830-ddce-4c49-9e54-05d45871b4b0] Running
	I0422 11:08:51.052240   27717 system_pods.go:61] "etcd-ha-821265" [1a27ab5d-19af-49d9-8eb3-e50b7e2225a5] Running
	I0422 11:08:51.052243   27717 system_pods.go:61] "etcd-ha-821265-m02" [4ba0de26-81d6-423b-a5a4-9fd88c90ebdc] Running
	I0422 11:08:51.052246   27717 system_pods.go:61] "kindnet-jm2pd" [0550a9db-b106-4ac4-9976-118d80927509] Running
	I0422 11:08:51.052249   27717 system_pods.go:61] "kindnet-qbq9z" [9751a17f-e26b-4ba8-81ce-077103c0aa1c] Running
	I0422 11:08:51.052252   27717 system_pods.go:61] "kube-apiserver-ha-821265" [1e20fb49-c54d-49fd-900b-38e347a52f9a] Running
	I0422 11:08:51.052254   27717 system_pods.go:61] "kube-apiserver-ha-821265-m02" [95616042-7a05-4fc3-a1ef-7fd56c8b3cd8] Running
	I0422 11:08:51.052258   27717 system_pods.go:61] "kube-controller-manager-ha-821265" [51933fc1-af7c-4fb0-b811-b6312f4b4d29] Running
	I0422 11:08:51.052260   27717 system_pods.go:61] "kube-controller-manager-ha-821265-m02" [4af2c432-4c7c-4f1f-98da-34af2648d7db] Running
	I0422 11:08:51.052263   27717 system_pods.go:61] "kube-proxy-j2hpk" [3ebf4ab0-bc76-4f5c-916e-6b28a81dc031] Running
	I0422 11:08:51.052266   27717 system_pods.go:61] "kube-proxy-w7r9d" [56a4f7fc-5ce0-4d77-b30f-9d39cded457c] Running
	I0422 11:08:51.052269   27717 system_pods.go:61] "kube-scheduler-ha-821265" [929e0c00-c49a-4b96-8f6a-7a84ae4f117c] Running
	I0422 11:08:51.052272   27717 system_pods.go:61] "kube-scheduler-ha-821265-m02" [589c30c7-d9df-4745-bdb3-87ae02ab2b67] Running
	I0422 11:08:51.052274   27717 system_pods.go:61] "kube-vip-ha-821265" [9322f0ee-9e3e-4585-9388-44ccd1417371] Running
	I0422 11:08:51.052277   27717 system_pods.go:61] "kube-vip-ha-821265-m02" [466697de-7dbe-4e6c-be95-9463a9548cde] Running
	I0422 11:08:51.052280   27717 system_pods.go:61] "storage-provisioner" [4b44da93-f3fa-49b7-a701-5ab7a430374f] Running
	I0422 11:08:51.052285   27717 system_pods.go:74] duration metric: took 182.128313ms to wait for pod list to return data ...
	I0422 11:08:51.052292   27717 default_sa.go:34] waiting for default service account to be created ...
	I0422 11:08:51.239721   27717 request.go:629] Waited for 187.364826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/default/serviceaccounts
	I0422 11:08:51.239797   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/default/serviceaccounts
	I0422 11:08:51.239811   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:51.239821   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:51.239829   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:51.243700   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:51.243954   27717 default_sa.go:45] found service account: "default"
	I0422 11:08:51.243974   27717 default_sa.go:55] duration metric: took 191.676706ms for default service account to be created ...
	I0422 11:08:51.243982   27717 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 11:08:51.439120   27717 request.go:629] Waited for 195.06203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:08:51.439184   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:08:51.439190   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:51.439197   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:51.439201   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:51.445273   27717 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0422 11:08:51.450063   27717 system_pods.go:86] 17 kube-system pods found
	I0422 11:08:51.450088   27717 system_pods.go:89] "coredns-7db6d8ff4d-ft2jl" [09e14815-b8e9-4b60-9b2c-c7d86cccb594] Running
	I0422 11:08:51.450094   27717 system_pods.go:89] "coredns-7db6d8ff4d-ht7jl" [c404a830-ddce-4c49-9e54-05d45871b4b0] Running
	I0422 11:08:51.450098   27717 system_pods.go:89] "etcd-ha-821265" [1a27ab5d-19af-49d9-8eb3-e50b7e2225a5] Running
	I0422 11:08:51.450103   27717 system_pods.go:89] "etcd-ha-821265-m02" [4ba0de26-81d6-423b-a5a4-9fd88c90ebdc] Running
	I0422 11:08:51.450107   27717 system_pods.go:89] "kindnet-jm2pd" [0550a9db-b106-4ac4-9976-118d80927509] Running
	I0422 11:08:51.450111   27717 system_pods.go:89] "kindnet-qbq9z" [9751a17f-e26b-4ba8-81ce-077103c0aa1c] Running
	I0422 11:08:51.450115   27717 system_pods.go:89] "kube-apiserver-ha-821265" [1e20fb49-c54d-49fd-900b-38e347a52f9a] Running
	I0422 11:08:51.450119   27717 system_pods.go:89] "kube-apiserver-ha-821265-m02" [95616042-7a05-4fc3-a1ef-7fd56c8b3cd8] Running
	I0422 11:08:51.450123   27717 system_pods.go:89] "kube-controller-manager-ha-821265" [51933fc1-af7c-4fb0-b811-b6312f4b4d29] Running
	I0422 11:08:51.450130   27717 system_pods.go:89] "kube-controller-manager-ha-821265-m02" [4af2c432-4c7c-4f1f-98da-34af2648d7db] Running
	I0422 11:08:51.450134   27717 system_pods.go:89] "kube-proxy-j2hpk" [3ebf4ab0-bc76-4f5c-916e-6b28a81dc031] Running
	I0422 11:08:51.450141   27717 system_pods.go:89] "kube-proxy-w7r9d" [56a4f7fc-5ce0-4d77-b30f-9d39cded457c] Running
	I0422 11:08:51.450145   27717 system_pods.go:89] "kube-scheduler-ha-821265" [929e0c00-c49a-4b96-8f6a-7a84ae4f117c] Running
	I0422 11:08:51.450151   27717 system_pods.go:89] "kube-scheduler-ha-821265-m02" [589c30c7-d9df-4745-bdb3-87ae02ab2b67] Running
	I0422 11:08:51.450155   27717 system_pods.go:89] "kube-vip-ha-821265" [9322f0ee-9e3e-4585-9388-44ccd1417371] Running
	I0422 11:08:51.450162   27717 system_pods.go:89] "kube-vip-ha-821265-m02" [466697de-7dbe-4e6c-be95-9463a9548cde] Running
	I0422 11:08:51.450167   27717 system_pods.go:89] "storage-provisioner" [4b44da93-f3fa-49b7-a701-5ab7a430374f] Running
	I0422 11:08:51.450176   27717 system_pods.go:126] duration metric: took 206.186469ms to wait for k8s-apps to be running ...
	I0422 11:08:51.450184   27717 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 11:08:51.450235   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:08:51.466354   27717 system_svc.go:56] duration metric: took 16.160874ms WaitForService to wait for kubelet
	I0422 11:08:51.466383   27717 kubeadm.go:576] duration metric: took 18.461863443s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 11:08:51.466405   27717 node_conditions.go:102] verifying NodePressure condition ...
	I0422 11:08:51.640057   27717 request.go:629] Waited for 173.571533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes
	I0422 11:08:51.640104   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes
	I0422 11:08:51.640109   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:51.640116   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:51.640119   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:51.645262   27717 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 11:08:51.646980   27717 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 11:08:51.647003   27717 node_conditions.go:123] node cpu capacity is 2
	I0422 11:08:51.647016   27717 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 11:08:51.647021   27717 node_conditions.go:123] node cpu capacity is 2
	I0422 11:08:51.647026   27717 node_conditions.go:105] duration metric: took 180.615876ms to run NodePressure ...
	I0422 11:08:51.647041   27717 start.go:240] waiting for startup goroutines ...
	I0422 11:08:51.647076   27717 start.go:254] writing updated cluster config ...
	I0422 11:08:51.649362   27717 out.go:177] 
	I0422 11:08:51.651069   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:08:51.651185   27717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:08:51.652866   27717 out.go:177] * Starting "ha-821265-m03" control-plane node in "ha-821265" cluster
	I0422 11:08:51.654285   27717 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 11:08:51.654313   27717 cache.go:56] Caching tarball of preloaded images
	I0422 11:08:51.654406   27717 preload.go:173] Found /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 11:08:51.654419   27717 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 11:08:51.654510   27717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:08:51.654680   27717 start.go:360] acquireMachinesLock for ha-821265-m03: {Name:mk5cb9b294e703b264c1f97ac968ffd01e93b576 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 11:08:51.654736   27717 start.go:364] duration metric: took 34.256µs to acquireMachinesLock for "ha-821265-m03"
	I0422 11:08:51.654762   27717 start.go:93] Provisioning new machine with config: &{Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 11:08:51.654873   27717 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0422 11:08:51.656529   27717 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 11:08:51.656614   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:08:51.656648   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:08:51.671283   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41653
	I0422 11:08:51.671735   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:08:51.672165   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:08:51.672182   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:08:51.672539   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:08:51.672749   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetMachineName
	I0422 11:08:51.672936   27717 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:08:51.673119   27717 start.go:159] libmachine.API.Create for "ha-821265" (driver="kvm2")
	I0422 11:08:51.673147   27717 client.go:168] LocalClient.Create starting
	I0422 11:08:51.673180   27717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem
	I0422 11:08:51.673219   27717 main.go:141] libmachine: Decoding PEM data...
	I0422 11:08:51.673235   27717 main.go:141] libmachine: Parsing certificate...
	I0422 11:08:51.673297   27717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem
	I0422 11:08:51.673318   27717 main.go:141] libmachine: Decoding PEM data...
	I0422 11:08:51.673334   27717 main.go:141] libmachine: Parsing certificate...
	I0422 11:08:51.673359   27717 main.go:141] libmachine: Running pre-create checks...
	I0422 11:08:51.673370   27717 main.go:141] libmachine: (ha-821265-m03) Calling .PreCreateCheck
	I0422 11:08:51.673549   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetConfigRaw
	I0422 11:08:51.673962   27717 main.go:141] libmachine: Creating machine...
	I0422 11:08:51.673978   27717 main.go:141] libmachine: (ha-821265-m03) Calling .Create
	I0422 11:08:51.674114   27717 main.go:141] libmachine: (ha-821265-m03) Creating KVM machine...
	I0422 11:08:51.675559   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found existing default KVM network
	I0422 11:08:51.675687   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found existing private KVM network mk-ha-821265
	I0422 11:08:51.675828   27717 main.go:141] libmachine: (ha-821265-m03) Setting up store path in /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03 ...
	I0422 11:08:51.675851   27717 main.go:141] libmachine: (ha-821265-m03) Building disk image from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0422 11:08:51.675925   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:51.675807   28516 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:08:51.675993   27717 main.go:141] libmachine: (ha-821265-m03) Downloading /home/jenkins/minikube-integration/18711-7633/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0422 11:08:51.886984   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:51.886854   28516 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa...
	I0422 11:08:52.024651   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:52.024529   28516 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/ha-821265-m03.rawdisk...
	I0422 11:08:52.024687   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Writing magic tar header
	I0422 11:08:52.024703   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Writing SSH key tar header
	I0422 11:08:52.024721   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:52.024685   28516 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03 ...
	I0422 11:08:52.024903   27717 main.go:141] libmachine: (ha-821265-m03) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03 (perms=drwx------)
	I0422 11:08:52.024924   27717 main.go:141] libmachine: (ha-821265-m03) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines (perms=drwxr-xr-x)
	I0422 11:08:52.024934   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03
	I0422 11:08:52.024944   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines
	I0422 11:08:52.024955   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:08:52.024963   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633
	I0422 11:08:52.024978   27717 main.go:141] libmachine: (ha-821265-m03) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube (perms=drwxr-xr-x)
	I0422 11:08:52.024987   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 11:08:52.024995   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Checking permissions on dir: /home/jenkins
	I0422 11:08:52.025003   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Checking permissions on dir: /home
	I0422 11:08:52.025012   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Skipping /home - not owner
	I0422 11:08:52.025024   27717 main.go:141] libmachine: (ha-821265-m03) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633 (perms=drwxrwxr-x)
	I0422 11:08:52.025033   27717 main.go:141] libmachine: (ha-821265-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 11:08:52.025042   27717 main.go:141] libmachine: (ha-821265-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 11:08:52.025050   27717 main.go:141] libmachine: (ha-821265-m03) Creating domain...
	I0422 11:08:52.025930   27717 main.go:141] libmachine: (ha-821265-m03) define libvirt domain using xml: 
	I0422 11:08:52.025952   27717 main.go:141] libmachine: (ha-821265-m03) <domain type='kvm'>
	I0422 11:08:52.025962   27717 main.go:141] libmachine: (ha-821265-m03)   <name>ha-821265-m03</name>
	I0422 11:08:52.025970   27717 main.go:141] libmachine: (ha-821265-m03)   <memory unit='MiB'>2200</memory>
	I0422 11:08:52.025979   27717 main.go:141] libmachine: (ha-821265-m03)   <vcpu>2</vcpu>
	I0422 11:08:52.025990   27717 main.go:141] libmachine: (ha-821265-m03)   <features>
	I0422 11:08:52.025999   27717 main.go:141] libmachine: (ha-821265-m03)     <acpi/>
	I0422 11:08:52.026010   27717 main.go:141] libmachine: (ha-821265-m03)     <apic/>
	I0422 11:08:52.026029   27717 main.go:141] libmachine: (ha-821265-m03)     <pae/>
	I0422 11:08:52.026044   27717 main.go:141] libmachine: (ha-821265-m03)     
	I0422 11:08:52.026056   27717 main.go:141] libmachine: (ha-821265-m03)   </features>
	I0422 11:08:52.026067   27717 main.go:141] libmachine: (ha-821265-m03)   <cpu mode='host-passthrough'>
	I0422 11:08:52.026078   27717 main.go:141] libmachine: (ha-821265-m03)   
	I0422 11:08:52.026088   27717 main.go:141] libmachine: (ha-821265-m03)   </cpu>
	I0422 11:08:52.026098   27717 main.go:141] libmachine: (ha-821265-m03)   <os>
	I0422 11:08:52.026113   27717 main.go:141] libmachine: (ha-821265-m03)     <type>hvm</type>
	I0422 11:08:52.026126   27717 main.go:141] libmachine: (ha-821265-m03)     <boot dev='cdrom'/>
	I0422 11:08:52.026137   27717 main.go:141] libmachine: (ha-821265-m03)     <boot dev='hd'/>
	I0422 11:08:52.026147   27717 main.go:141] libmachine: (ha-821265-m03)     <bootmenu enable='no'/>
	I0422 11:08:52.026157   27717 main.go:141] libmachine: (ha-821265-m03)   </os>
	I0422 11:08:52.026166   27717 main.go:141] libmachine: (ha-821265-m03)   <devices>
	I0422 11:08:52.026182   27717 main.go:141] libmachine: (ha-821265-m03)     <disk type='file' device='cdrom'>
	I0422 11:08:52.026200   27717 main.go:141] libmachine: (ha-821265-m03)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/boot2docker.iso'/>
	I0422 11:08:52.026212   27717 main.go:141] libmachine: (ha-821265-m03)       <target dev='hdc' bus='scsi'/>
	I0422 11:08:52.026222   27717 main.go:141] libmachine: (ha-821265-m03)       <readonly/>
	I0422 11:08:52.026231   27717 main.go:141] libmachine: (ha-821265-m03)     </disk>
	I0422 11:08:52.026244   27717 main.go:141] libmachine: (ha-821265-m03)     <disk type='file' device='disk'>
	I0422 11:08:52.026261   27717 main.go:141] libmachine: (ha-821265-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 11:08:52.026278   27717 main.go:141] libmachine: (ha-821265-m03)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/ha-821265-m03.rawdisk'/>
	I0422 11:08:52.026289   27717 main.go:141] libmachine: (ha-821265-m03)       <target dev='hda' bus='virtio'/>
	I0422 11:08:52.026302   27717 main.go:141] libmachine: (ha-821265-m03)     </disk>
	I0422 11:08:52.026313   27717 main.go:141] libmachine: (ha-821265-m03)     <interface type='network'>
	I0422 11:08:52.026338   27717 main.go:141] libmachine: (ha-821265-m03)       <source network='mk-ha-821265'/>
	I0422 11:08:52.026354   27717 main.go:141] libmachine: (ha-821265-m03)       <model type='virtio'/>
	I0422 11:08:52.026362   27717 main.go:141] libmachine: (ha-821265-m03)     </interface>
	I0422 11:08:52.026375   27717 main.go:141] libmachine: (ha-821265-m03)     <interface type='network'>
	I0422 11:08:52.026385   27717 main.go:141] libmachine: (ha-821265-m03)       <source network='default'/>
	I0422 11:08:52.026393   27717 main.go:141] libmachine: (ha-821265-m03)       <model type='virtio'/>
	I0422 11:08:52.026400   27717 main.go:141] libmachine: (ha-821265-m03)     </interface>
	I0422 11:08:52.026408   27717 main.go:141] libmachine: (ha-821265-m03)     <serial type='pty'>
	I0422 11:08:52.026414   27717 main.go:141] libmachine: (ha-821265-m03)       <target port='0'/>
	I0422 11:08:52.026421   27717 main.go:141] libmachine: (ha-821265-m03)     </serial>
	I0422 11:08:52.026428   27717 main.go:141] libmachine: (ha-821265-m03)     <console type='pty'>
	I0422 11:08:52.026436   27717 main.go:141] libmachine: (ha-821265-m03)       <target type='serial' port='0'/>
	I0422 11:08:52.026469   27717 main.go:141] libmachine: (ha-821265-m03)     </console>
	I0422 11:08:52.026493   27717 main.go:141] libmachine: (ha-821265-m03)     <rng model='virtio'>
	I0422 11:08:52.026509   27717 main.go:141] libmachine: (ha-821265-m03)       <backend model='random'>/dev/random</backend>
	I0422 11:08:52.026518   27717 main.go:141] libmachine: (ha-821265-m03)     </rng>
	I0422 11:08:52.026527   27717 main.go:141] libmachine: (ha-821265-m03)     
	I0422 11:08:52.026535   27717 main.go:141] libmachine: (ha-821265-m03)     
	I0422 11:08:52.026543   27717 main.go:141] libmachine: (ha-821265-m03)   </devices>
	I0422 11:08:52.026554   27717 main.go:141] libmachine: (ha-821265-m03) </domain>
	I0422 11:08:52.026562   27717 main.go:141] libmachine: (ha-821265-m03) 
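
[Editor's note] The driver defines the new guest from the libvirt domain XML printed above: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO as a SCSI cdrom, the raw disk on virtio, and virtio NICs on both the private mk-ha-821265 network and the default network. As a rough illustration only (the kvm2 driver goes through the libvirt API, not the CLI), a minimal Go sketch that defines and starts a domain from such an XML file with virsh; the XML path is a placeholder:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Hypothetical path to a domain XML like the one printed in the log above.
	xmlPath := "/tmp/ha-821265-m03.xml"

	// "virsh define" registers the persistent domain; "virsh start" boots it.
	for _, args := range [][]string{
		{"define", xmlPath},
		{"start", "ha-821265-m03"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("virsh %v failed: %v\n%s", args, err, out)
		}
		fmt.Printf("virsh %v: %s", args, out)
	}
}
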
	I0422 11:08:52.033440   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:42:52:d4 in network default
	I0422 11:08:52.033919   27717 main.go:141] libmachine: (ha-821265-m03) Ensuring networks are active...
	I0422 11:08:52.033939   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:52.034637   27717 main.go:141] libmachine: (ha-821265-m03) Ensuring network default is active
	I0422 11:08:52.034969   27717 main.go:141] libmachine: (ha-821265-m03) Ensuring network mk-ha-821265 is active
	I0422 11:08:52.035313   27717 main.go:141] libmachine: (ha-821265-m03) Getting domain xml...
	I0422 11:08:52.036058   27717 main.go:141] libmachine: (ha-821265-m03) Creating domain...
	I0422 11:08:53.244492   27717 main.go:141] libmachine: (ha-821265-m03) Waiting to get IP...
	I0422 11:08:53.245385   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:53.245793   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:53.245819   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:53.245787   28516 retry.go:31] will retry after 234.374116ms: waiting for machine to come up
	I0422 11:08:53.482189   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:53.482648   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:53.482685   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:53.482606   28516 retry.go:31] will retry after 381.567774ms: waiting for machine to come up
	I0422 11:08:53.866209   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:53.866689   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:53.866720   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:53.866656   28516 retry.go:31] will retry after 479.573791ms: waiting for machine to come up
	I0422 11:08:54.347782   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:54.348239   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:54.348260   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:54.348185   28516 retry.go:31] will retry after 396.163013ms: waiting for machine to come up
	I0422 11:08:54.745906   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:54.746940   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:54.747002   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:54.746880   28516 retry.go:31] will retry after 604.728808ms: waiting for machine to come up
	I0422 11:08:55.352872   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:55.353362   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:55.353396   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:55.353311   28516 retry.go:31] will retry after 577.189213ms: waiting for machine to come up
	I0422 11:08:55.931772   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:55.932234   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:55.932268   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:55.932166   28516 retry.go:31] will retry after 1.115081687s: waiting for machine to come up
	I0422 11:08:57.050105   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:57.050983   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:57.051025   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:57.050956   28516 retry.go:31] will retry after 944.628006ms: waiting for machine to come up
	I0422 11:08:57.996698   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:57.997154   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:57.997179   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:57.997109   28516 retry.go:31] will retry after 1.130350135s: waiting for machine to come up
	I0422 11:08:59.129494   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:59.130069   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:59.130089   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:59.130031   28516 retry.go:31] will retry after 1.837856027s: waiting for machine to come up
	I0422 11:09:00.969944   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:00.970400   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:09:00.970424   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:09:00.970372   28516 retry.go:31] will retry after 1.911594615s: waiting for machine to come up
	I0422 11:09:02.884148   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:02.884548   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:09:02.884588   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:09:02.884543   28516 retry.go:31] will retry after 3.346493159s: waiting for machine to come up
	I0422 11:09:06.233823   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:06.234193   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:09:06.234218   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:09:06.234169   28516 retry.go:31] will retry after 4.176571643s: waiting for machine to come up
	I0422 11:09:10.414050   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:10.414515   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:09:10.414544   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:09:10.414468   28516 retry.go:31] will retry after 4.838574881s: waiting for machine to come up
	I0422 11:09:15.257405   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.257875   27717 main.go:141] libmachine: (ha-821265-m03) Found IP for machine: 192.168.39.95
	I0422 11:09:15.257895   27717 main.go:141] libmachine: (ha-821265-m03) Reserving static IP address...
	I0422 11:09:15.257908   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has current primary IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.258261   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find host DHCP lease matching {name: "ha-821265-m03", mac: "52:54:00:24:8e:51", ip: "192.168.39.95"} in network mk-ha-821265
	I0422 11:09:15.335329   27717 main.go:141] libmachine: (ha-821265-m03) Reserved static IP address: 192.168.39.95
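
[Editor's note] Once DHCP hands the guest an address, the driver pins it as a static lease in the libvirt network so the node keeps the same IP across restarts. A hedged sketch of the same idea using virsh net-update; the MAC, hostname, IP and network name are copied from the log and should be treated as placeholders:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Values taken from the log above; treat them as placeholders.
	network := "mk-ha-821265"
	entry := `<host mac='52:54:00:24:8e:51' name='ha-821265-m03' ip='192.168.39.95'/>`

	// Pin the DHCP lease both in the running network (--live) and its config (--config).
	out, err := exec.Command("virsh", "net-update", network,
		"add", "ip-dhcp-host", entry, "--live", "--config").CombinedOutput()
	if err != nil {
		log.Fatalf("net-update failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}
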
	I0422 11:09:15.335356   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Getting to WaitForSSH function...
	I0422 11:09:15.335365   27717 main.go:141] libmachine: (ha-821265-m03) Waiting for SSH to be available...
	I0422 11:09:15.337802   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.338310   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:minikube Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:15.338343   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.338536   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Using SSH client type: external
	I0422 11:09:15.338576   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa (-rw-------)
	I0422 11:09:15.338626   27717 main.go:141] libmachine: (ha-821265-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 11:09:15.338650   27717 main.go:141] libmachine: (ha-821265-m03) DBG | About to run SSH command:
	I0422 11:09:15.338665   27717 main.go:141] libmachine: (ha-821265-m03) DBG | exit 0
	I0422 11:09:15.465225   27717 main.go:141] libmachine: (ha-821265-m03) DBG | SSH cmd err, output: <nil>: 
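
[Editor's note] Getting to this point is a poll: the driver repeatedly looks for the DHCP lease with a growing delay, then probes SSH by running a bare "exit 0". A minimal sketch of that retry pattern, assuming a plain ssh client on the PATH and a fixed multiplicative backoff instead of the randomized delays shown in the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitFor polls check with a growing delay, similar in spirit to the
// "will retry after ..." loop in the log; here the delay simply grows by 1.5x.
func waitFor(check func() error, initial, max time.Duration, attempts int) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := check(); err == nil {
			return nil
		}
		time.Sleep(delay)
		if delay = delay * 3 / 2; delay > max {
			delay = max
		}
	}
	return errors.New("condition not met")
}

func main() {
	host := "192.168.39.95" // IP found in the log; a placeholder here

	err := waitFor(func() error {
		// Same probe the driver uses once an address is known: a no-op
		// "exit 0" over SSH with host key checking disabled.
		return exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "ConnectTimeout=10",
			"docker@"+host, "exit 0").Run()
	}, 250*time.Millisecond, 5*time.Second, 20)
	fmt.Println("ssh reachable:", err == nil)
}
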
	I0422 11:09:15.465514   27717 main.go:141] libmachine: (ha-821265-m03) KVM machine creation complete!
	I0422 11:09:15.465854   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetConfigRaw
	I0422 11:09:15.466374   27717 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:09:15.466566   27717 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:09:15.466768   27717 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 11:09:15.466786   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetState
	I0422 11:09:15.468053   27717 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 11:09:15.468067   27717 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 11:09:15.468075   27717 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 11:09:15.468082   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:15.470630   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.470934   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:15.470957   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.471103   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:15.471291   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:15.471444   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:15.471590   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:15.471729   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:09:15.471979   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0422 11:09:15.471991   27717 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 11:09:15.572509   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 11:09:15.572537   27717 main.go:141] libmachine: Detecting the provisioner...
	I0422 11:09:15.572547   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:15.575283   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.575645   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:15.575675   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.575761   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:15.575960   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:15.576098   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:15.576231   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:15.576433   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:09:15.576591   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0422 11:09:15.576603   27717 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 11:09:15.678450   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 11:09:15.678523   27717 main.go:141] libmachine: found compatible host: buildroot
	I0422 11:09:15.678539   27717 main.go:141] libmachine: Provisioning with buildroot...
	I0422 11:09:15.678551   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetMachineName
	I0422 11:09:15.678834   27717 buildroot.go:166] provisioning hostname "ha-821265-m03"
	I0422 11:09:15.678859   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetMachineName
	I0422 11:09:15.679062   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:15.681822   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.682177   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:15.682203   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.682384   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:15.682568   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:15.682727   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:15.682868   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:15.683046   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:09:15.683194   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0422 11:09:15.683205   27717 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-821265-m03 && echo "ha-821265-m03" | sudo tee /etc/hostname
	I0422 11:09:15.806551   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-821265-m03
	
	I0422 11:09:15.806583   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:15.809699   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.810036   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:15.810065   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.810201   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:15.810407   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:15.810583   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:15.810754   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:15.811031   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:09:15.811223   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0422 11:09:15.811248   27717 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-821265-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-821265-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-821265-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 11:09:15.924445   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 11:09:15.924469   27717 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18711-7633/.minikube CaCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18711-7633/.minikube}
	I0422 11:09:15.924485   27717 buildroot.go:174] setting up certificates
	I0422 11:09:15.924498   27717 provision.go:84] configureAuth start
	I0422 11:09:15.924511   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetMachineName
	I0422 11:09:15.924793   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetIP
	I0422 11:09:15.927506   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.927908   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:15.927936   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.928122   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:15.930093   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.930413   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:15.930445   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.930577   27717 provision.go:143] copyHostCerts
	I0422 11:09:15.930610   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:09:15.930646   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem, removing ...
	I0422 11:09:15.930660   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:09:15.930739   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem (1078 bytes)
	I0422 11:09:15.930810   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:09:15.930827   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem, removing ...
	I0422 11:09:15.930832   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:09:15.930860   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem (1123 bytes)
	I0422 11:09:15.930908   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:09:15.930923   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem, removing ...
	I0422 11:09:15.930927   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:09:15.930946   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem (1679 bytes)
	I0422 11:09:15.930990   27717 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem org=jenkins.ha-821265-m03 san=[127.0.0.1 192.168.39.95 ha-821265-m03 localhost minikube]
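
[Editor's note] configureAuth issues a server certificate signed by the local minikube CA, with SANs covering the node's IP, hostname, localhost and minikube. The sketch below shows the same shape of certificate built with crypto/x509 under a throwaway CA; it is not minikube's provisioning code, names are placeholders, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for the ca.pem/ca-key.pem pair read earlier; errors ignored for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "example-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign,
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the same kind of SAN list seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-821265-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-821265-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.95")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
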
	I0422 11:09:16.024553   27717 provision.go:177] copyRemoteCerts
	I0422 11:09:16.024614   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 11:09:16.024637   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:16.027483   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.027829   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.027853   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.028049   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:16.028237   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:16.028411   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:16.028605   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:09:16.112900   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 11:09:16.112967   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 11:09:16.143585   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 11:09:16.143658   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0422 11:09:16.169632   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 11:09:16.169713   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 11:09:16.199360   27717 provision.go:87] duration metric: took 274.848144ms to configureAuth
	I0422 11:09:16.199393   27717 buildroot.go:189] setting minikube options for container-runtime
	I0422 11:09:16.199624   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:09:16.199728   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:16.202554   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.202901   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.202935   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.203218   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:16.203402   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:16.203558   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:16.203662   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:16.203823   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:09:16.204094   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0422 11:09:16.204122   27717 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 11:09:16.496129   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 11:09:16.496171   27717 main.go:141] libmachine: Checking connection to Docker...
	I0422 11:09:16.496181   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetURL
	I0422 11:09:16.497655   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Using libvirt version 6000000
	I0422 11:09:16.499978   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.500425   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.500456   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.500670   27717 main.go:141] libmachine: Docker is up and running!
	I0422 11:09:16.500684   27717 main.go:141] libmachine: Reticulating splines...
	I0422 11:09:16.500690   27717 client.go:171] duration metric: took 24.827536517s to LocalClient.Create
	I0422 11:09:16.500712   27717 start.go:167] duration metric: took 24.827594634s to libmachine.API.Create "ha-821265"
	I0422 11:09:16.500725   27717 start.go:293] postStartSetup for "ha-821265-m03" (driver="kvm2")
	I0422 11:09:16.500738   27717 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 11:09:16.500760   27717 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:09:16.501066   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 11:09:16.501094   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:16.503847   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.504238   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.504279   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.504471   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:16.504698   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:16.504899   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:16.505051   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:09:16.589038   27717 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 11:09:16.593840   27717 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 11:09:16.593868   27717 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/addons for local assets ...
	I0422 11:09:16.593932   27717 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/files for local assets ...
	I0422 11:09:16.593999   27717 filesync.go:149] local asset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> 149452.pem in /etc/ssl/certs
	I0422 11:09:16.594008   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /etc/ssl/certs/149452.pem
	I0422 11:09:16.594086   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 11:09:16.605530   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:09:16.632347   27717 start.go:296] duration metric: took 131.607684ms for postStartSetup
	I0422 11:09:16.632401   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetConfigRaw
	I0422 11:09:16.632992   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetIP
	I0422 11:09:16.635433   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.635726   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.635756   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.635999   27717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:09:16.636182   27717 start.go:128] duration metric: took 24.981299957s to createHost
	I0422 11:09:16.636205   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:16.638145   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.638480   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.638507   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.638656   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:16.638818   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:16.638955   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:16.639046   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:16.639183   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:09:16.639429   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0422 11:09:16.639445   27717 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 11:09:16.742163   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713784156.713110697
	
	I0422 11:09:16.742189   27717 fix.go:216] guest clock: 1713784156.713110697
	I0422 11:09:16.742200   27717 fix.go:229] Guest: 2024-04-22 11:09:16.713110697 +0000 UTC Remote: 2024-04-22 11:09:16.636195909 +0000 UTC m=+159.762522555 (delta=76.914788ms)
	I0422 11:09:16.742222   27717 fix.go:200] guest clock delta is within tolerance: 76.914788ms
	I0422 11:09:16.742230   27717 start.go:83] releasing machines lock for "ha-821265-m03", held for 25.087482422s
	I0422 11:09:16.742258   27717 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:09:16.742561   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetIP
	I0422 11:09:16.745430   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.745764   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.745794   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.748266   27717 out.go:177] * Found network options:
	I0422 11:09:16.749634   27717 out.go:177]   - NO_PROXY=192.168.39.150,192.168.39.39
	W0422 11:09:16.750980   27717 proxy.go:119] fail to check proxy env: Error ip not in block
	W0422 11:09:16.751009   27717 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 11:09:16.751029   27717 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:09:16.751641   27717 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:09:16.751874   27717 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:09:16.751979   27717 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 11:09:16.752018   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	W0422 11:09:16.752104   27717 proxy.go:119] fail to check proxy env: Error ip not in block
	W0422 11:09:16.752131   27717 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 11:09:16.752212   27717 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 11:09:16.752236   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:16.754931   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.755141   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.755354   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.755388   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.755520   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.755547   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.755793   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:16.755898   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:16.755971   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:16.756069   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:16.756130   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:16.756188   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:16.756370   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:09:16.756383   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:09:16.995055   27717 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 11:09:17.003159   27717 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 11:09:17.003265   27717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 11:09:17.022246   27717 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 11:09:17.022274   27717 start.go:494] detecting cgroup driver to use...
	I0422 11:09:17.022344   27717 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 11:09:17.039766   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 11:09:17.055183   27717 docker.go:217] disabling cri-docker service (if available) ...
	I0422 11:09:17.055249   27717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 11:09:17.071071   27717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 11:09:17.086203   27717 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 11:09:17.212333   27717 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 11:09:17.401337   27717 docker.go:233] disabling docker service ...
	I0422 11:09:17.401418   27717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 11:09:17.421314   27717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 11:09:17.438204   27717 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 11:09:17.565481   27717 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 11:09:17.701482   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 11:09:17.719346   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 11:09:17.742002   27717 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 11:09:17.742069   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:09:17.754885   27717 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 11:09:17.754944   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:09:17.769590   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:09:17.784142   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:09:17.796657   27717 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 11:09:17.811165   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:09:17.827414   27717 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:09:17.849119   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:09:17.862638   27717 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 11:09:17.874610   27717 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 11:09:17.874676   27717 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 11:09:17.891831   27717 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 11:09:17.904059   27717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:09:18.028167   27717 ssh_runner.go:195] Run: sudo systemctl restart crio
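The sed edits above converge on a small CRI-O drop-in before the daemon is restarted. A minimal sketch, assuming direct file access on the guest rather than minikube's sed-over-SSH edits, of writing an equivalent /etc/crio/crio.conf.d/02-crio.conf (the section headers are the standard CRI-O ones; minikube edits existing lines in place instead):

package main

import "os"

// Equivalent settings to what the sed commands above edit into place: the pause
// image, cgroupfs as the cgroup manager, conmon in the pod cgroup, and the
// sysctl that allows binding unprivileged ports starting at 0.
const dropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
}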
	I0422 11:09:18.190198   27717 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 11:09:18.190273   27717 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 11:09:18.196461   27717 start.go:562] Will wait 60s for crictl version
	I0422 11:09:18.196533   27717 ssh_runner.go:195] Run: which crictl
	I0422 11:09:18.200973   27717 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 11:09:18.241976   27717 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 11:09:18.242058   27717 ssh_runner.go:195] Run: crio --version
	I0422 11:09:18.276722   27717 ssh_runner.go:195] Run: crio --version
	I0422 11:09:18.312736   27717 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 11:09:18.314312   27717 out.go:177]   - env NO_PROXY=192.168.39.150
	I0422 11:09:18.315777   27717 out.go:177]   - env NO_PROXY=192.168.39.150,192.168.39.39
	I0422 11:09:18.317079   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetIP
	I0422 11:09:18.319814   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:18.320279   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:18.320306   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:18.320528   27717 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 11:09:18.325438   27717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
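The one-liner above keeps /etc/hosts idempotent: it filters out any stale host.minikube.internal line and appends the current mapping. A rough Go equivalent, for illustration only (must run as root; path and IP taken from this run):

package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.39.1\thost.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for the managed hostname.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}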
	I0422 11:09:18.340224   27717 mustload.go:65] Loading cluster: ha-821265
	I0422 11:09:18.340479   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:09:18.340720   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:09:18.340792   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:09:18.355733   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40399
	I0422 11:09:18.356170   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:09:18.356643   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:09:18.356659   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:09:18.356957   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:09:18.357205   27717 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:09:18.359041   27717 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:09:18.359383   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:09:18.359422   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:09:18.374945   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43221
	I0422 11:09:18.375355   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:09:18.375881   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:09:18.375907   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:09:18.376247   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:09:18.376465   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:09:18.376621   27717 certs.go:68] Setting up /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265 for IP: 192.168.39.95
	I0422 11:09:18.376642   27717 certs.go:194] generating shared ca certs ...
	I0422 11:09:18.376662   27717 certs.go:226] acquiring lock for ca certs: {Name:mk0b77082b88c771d0b00be5267ca31dfee6f85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:09:18.376828   27717 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key
	I0422 11:09:18.376887   27717 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key
	I0422 11:09:18.376899   27717 certs.go:256] generating profile certs ...
	I0422 11:09:18.376967   27717 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key
	I0422 11:09:18.376994   27717 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.31f49dce
	I0422 11:09:18.377008   27717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.31f49dce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150 192.168.39.39 192.168.39.95 192.168.39.254]
	I0422 11:09:18.586174   27717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.31f49dce ...
	I0422 11:09:18.586202   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.31f49dce: {Name:mk0abe473282f1560348550eacbe3ea6fdc28112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:09:18.586359   27717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.31f49dce ...
	I0422 11:09:18.586372   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.31f49dce: {Name:mka3b0906da84245b52f3e9ec6c525d09b33b6e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:09:18.586445   27717 certs.go:381] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.31f49dce -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt
	I0422 11:09:18.586567   27717 certs.go:385] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.31f49dce -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key
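The IP list in the crypto.go line above is the important part: the regenerated apiserver certificate must carry a SAN for every address a client may use, i.e. the kubernetes service ClusterIP, localhost, each control-plane node, and the kube-vip VIP. A minimal, self-signed sketch of building such a certificate with Go's crypto/x509 (minikube signs it with the shared minikubeCA key rather than self-signing; the self-signing here is only to keep the example standalone):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway key for the sketch; the real cert reuses the profile's key material.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Every address a client may dial must appear as a SAN.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),      // kubernetes.default ClusterIP (first IP of 10.96.0.0/12)
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.150"), // existing control-plane nodes
			net.ParseIP("192.168.39.39"),
			net.ParseIP("192.168.39.95"),  // the node being joined (m03)
			net.ParseIP("192.168.39.254"), // kube-vip control-plane VIP
		},
	}
	// Self-signed purely for illustration; minikube signs with minikubeCA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}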
	I0422 11:09:18.586683   27717 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key
	I0422 11:09:18.586698   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 11:09:18.586710   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 11:09:18.586723   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 11:09:18.586736   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 11:09:18.586748   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 11:09:18.586760   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 11:09:18.586772   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 11:09:18.586784   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 11:09:18.586843   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem (1338 bytes)
	W0422 11:09:18.586868   27717 certs.go:480] ignoring /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945_empty.pem, impossibly tiny 0 bytes
	I0422 11:09:18.586877   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem (1679 bytes)
	I0422 11:09:18.586898   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem (1078 bytes)
	I0422 11:09:18.586918   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem (1123 bytes)
	I0422 11:09:18.586937   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem (1679 bytes)
	I0422 11:09:18.586971   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:09:18.586995   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:09:18.587012   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem -> /usr/share/ca-certificates/14945.pem
	I0422 11:09:18.587023   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /usr/share/ca-certificates/149452.pem
	I0422 11:09:18.587052   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:09:18.589945   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:09:18.590343   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:09:18.590371   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:09:18.590552   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:09:18.590726   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:09:18.590870   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:09:18.591045   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:09:18.665233   27717 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0422 11:09:18.671728   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0422 11:09:18.685400   27717 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0422 11:09:18.690495   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0422 11:09:18.704853   27717 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0422 11:09:18.710158   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0422 11:09:18.723765   27717 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0422 11:09:18.729225   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0422 11:09:18.743212   27717 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0422 11:09:18.748424   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0422 11:09:18.763219   27717 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0422 11:09:18.770678   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0422 11:09:18.786190   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 11:09:18.819799   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 11:09:18.852375   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 11:09:18.882081   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 11:09:18.911140   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0422 11:09:18.939772   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 11:09:18.968018   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 11:09:18.997132   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 11:09:19.026284   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 11:09:19.054169   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem --> /usr/share/ca-certificates/14945.pem (1338 bytes)
	I0422 11:09:19.086736   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /usr/share/ca-certificates/149452.pem (1708 bytes)
	I0422 11:09:19.118084   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0422 11:09:19.139533   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0422 11:09:19.159259   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0422 11:09:19.178452   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0422 11:09:19.199458   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0422 11:09:19.221696   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0422 11:09:19.241488   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0422 11:09:19.261496   27717 ssh_runner.go:195] Run: openssl version
	I0422 11:09:19.268162   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14945.pem && ln -fs /usr/share/ca-certificates/14945.pem /etc/ssl/certs/14945.pem"
	I0422 11:09:19.280231   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14945.pem
	I0422 11:09:19.285564   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 10:51 /usr/share/ca-certificates/14945.pem
	I0422 11:09:19.285614   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14945.pem
	I0422 11:09:19.292158   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14945.pem /etc/ssl/certs/51391683.0"
	I0422 11:09:19.304177   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149452.pem && ln -fs /usr/share/ca-certificates/149452.pem /etc/ssl/certs/149452.pem"
	I0422 11:09:19.317794   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149452.pem
	I0422 11:09:19.323152   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 10:51 /usr/share/ca-certificates/149452.pem
	I0422 11:09:19.323208   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149452.pem
	I0422 11:09:19.330146   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149452.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 11:09:19.342884   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 11:09:19.356685   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:09:19.362191   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:09:19.362241   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:09:19.368802   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 11:09:19.381512   27717 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 11:09:19.386404   27717 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 11:09:19.386454   27717 kubeadm.go:928] updating node {m03 192.168.39.95 8443 v1.30.0 crio true true} ...
	I0422 11:09:19.386529   27717 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-821265-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 11:09:19.386550   27717 kube-vip.go:111] generating kube-vip config ...
	I0422 11:09:19.386599   27717 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0422 11:09:19.406645   27717 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0422 11:09:19.406727   27717 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
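The manifest above is generated, not hand-written: kube-vip.go fills in the shared VIP (192.168.39.254), the guest interface, and the control-plane load-balancing settings detected for this cluster. A toy sketch of the same idea with text/template; the field names and the trimmed manifest below are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A trimmed, illustrative manifest; the real generated pod spec is the one shown above.
const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.7.1
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: vip_interface
      value: "{{ .Interface }}"
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	// Values used for this cluster: the shared control-plane VIP on the guest's eth0.
	data := struct {
		VIP, Interface, Port string
	}{"192.168.39.254", "eth0", "8443"}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}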
	I0422 11:09:19.406814   27717 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 11:09:19.419228   27717 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0422 11:09:19.419300   27717 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0422 11:09:19.431887   27717 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0422 11:09:19.431916   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0422 11:09:19.431916   27717 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0422 11:09:19.431935   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0422 11:09:19.431887   27717 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0422 11:09:19.431991   27717 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0422 11:09:19.431992   27717 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0422 11:09:19.432012   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:09:19.448974   27717 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0422 11:09:19.449015   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0422 11:09:19.449027   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0422 11:09:19.449059   27717 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0422 11:09:19.449080   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0422 11:09:19.449116   27717 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0422 11:09:19.486783   27717 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0422 11:09:19.486824   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
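The binary.go lines above note that each binary is fetched with a ?checksum=file:… query, i.e. the download is verified against the published .sha256 file for the same release. A standalone sketch of that verification for the kubeadm binary already copied to the node; the URL and path are taken from this run, and for these releases the .sha256 file holds just the hex digest:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func main() {
	const binPath = "/var/lib/minikube/binaries/v1.30.0/kubeadm" // path used in this run
	const sumURL = "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256"

	resp, err := http.Get(sumURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	f, err := os.Open(binPath)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}
	got := hex.EncodeToString(h.Sum(nil))

	if got == strings.TrimSpace(string(want)) {
		fmt.Println("kubeadm checksum OK")
	} else {
		fmt.Printf("checksum mismatch: got %s want %s\n", got, strings.TrimSpace(string(want)))
	}
}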
	I0422 11:09:20.595730   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0422 11:09:20.606456   27717 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0422 11:09:20.626239   27717 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 11:09:20.645685   27717 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0422 11:09:20.665373   27717 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0422 11:09:20.670422   27717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 11:09:20.685691   27717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:09:20.825824   27717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 11:09:20.845457   27717 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:09:20.845784   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:09:20.845821   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:09:20.861313   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44447
	I0422 11:09:20.862223   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:09:20.862765   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:09:20.862789   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:09:20.863111   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:09:20.863326   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:09:20.863507   27717 start.go:316] joinCluster: &{Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:defau
lt APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:fal
se istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:09:20.863617   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0422 11:09:20.863636   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:09:20.867189   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:09:20.867773   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:09:20.867802   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:09:20.868010   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:09:20.868195   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:09:20.868409   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:09:20.868571   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:09:21.044309   27717 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 11:09:21.044362   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8cpnhy.fsuqlvdl5mdoaw2l --discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-821265-m03 --control-plane --apiserver-advertise-address=192.168.39.95 --apiserver-bind-port=8443"
	I0422 11:09:45.816130   27717 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8cpnhy.fsuqlvdl5mdoaw2l --discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-821265-m03 --control-plane --apiserver-advertise-address=192.168.39.95 --apiserver-bind-port=8443": (24.7717447s)
	I0422 11:09:45.816167   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0422 11:09:46.534130   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-821265-m03 minikube.k8s.io/updated_at=2024_04_22T11_09_46_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437 minikube.k8s.io/name=ha-821265 minikube.k8s.io/primary=false
	I0422 11:09:46.650185   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-821265-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0422 11:09:46.782667   27717 start.go:318] duration metric: took 25.91915592s to joinCluster
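After the join, the two kubectl invocations above label the new node with minikube metadata and remove the control-plane NoSchedule taint so it can also run workloads. A sketch of the same two operations with client-go; the kubeconfig path, node name, and the label subset are taken from this run, and a production version would patch or retry on conflict rather than blindly update:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-821265-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	node.Labels["minikube.k8s.io/name"] = "ha-821265"
	node.Labels["minikube.k8s.io/primary"] = "false"
	// Drop the control-plane NoSchedule taint so workloads may schedule here too.
	var taints []corev1.Taint
	for _, t := range node.Spec.Taints {
		if t.Key != "node-role.kubernetes.io/control-plane" {
			taints = append(taints, t)
		}
	}
	node.Spec.Taints = taints
	if _, err := cs.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}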
	I0422 11:09:46.782754   27717 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 11:09:46.784737   27717 out.go:177] * Verifying Kubernetes components...
	I0422 11:09:46.783108   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:09:46.786691   27717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:09:47.065586   27717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 11:09:47.126472   27717 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 11:09:47.126805   27717 kapi.go:59] client config for ha-821265: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.crt", KeyFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key", CAFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0422 11:09:47.126904   27717 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.150:8443
	I0422 11:09:47.127221   27717 node_ready.go:35] waiting up to 6m0s for node "ha-821265-m03" to be "Ready" ...
	I0422 11:09:47.127305   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:47.127316   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:47.127326   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:47.127335   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:47.139056   27717 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0422 11:09:47.628225   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:47.628244   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:47.628252   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:47.628256   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:47.632295   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:09:48.128444   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:48.128473   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:48.128486   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:48.128493   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:48.132396   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:48.627458   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:48.627483   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:48.627495   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:48.627500   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:48.631537   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:09:49.128100   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:49.128123   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:49.128131   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:49.128135   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:49.132070   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:49.132722   27717 node_ready.go:53] node "ha-821265-m03" has status "Ready":"False"
	I0422 11:09:49.627810   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:49.627836   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:49.627846   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:49.627851   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:49.631389   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:50.127408   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:50.127555   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:50.127579   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:50.127588   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:50.131608   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:50.627496   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:50.627518   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:50.627526   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:50.627530   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:50.633238   27717 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 11:09:51.127897   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:51.127925   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:51.127936   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:51.127942   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:51.131758   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:51.627972   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:51.627992   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:51.627999   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:51.628003   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:51.631053   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:51.631812   27717 node_ready.go:53] node "ha-821265-m03" has status "Ready":"False"
	I0422 11:09:52.128060   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:52.128080   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:52.128088   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:52.128091   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:52.132044   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:52.628377   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:52.628400   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:52.628408   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:52.628412   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:52.632264   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:53.128388   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:53.128407   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:53.128416   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:53.128421   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:53.133077   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:09:53.628230   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:53.628254   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:53.628264   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:53.628269   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:53.631791   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:53.633300   27717 node_ready.go:53] node "ha-821265-m03" has status "Ready":"False"
	I0422 11:09:54.128058   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:54.128086   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:54.128094   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:54.128099   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:54.131943   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:54.627964   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:54.627985   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:54.627994   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:54.627998   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:54.631842   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:55.127908   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:55.127929   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.127936   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.127939   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.132024   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:09:55.133067   27717 node_ready.go:49] node "ha-821265-m03" has status "Ready":"True"
	I0422 11:09:55.133091   27717 node_ready.go:38] duration metric: took 8.005847302s for node "ha-821265-m03" to be "Ready" ...
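The node_ready wait above is a plain poll: fetch the Node object roughly twice a second and stop once its Ready condition reports True, or give up after the 6-minute budget. A minimal client-go sketch of that loop, with the kubeconfig path and node name as used in this run:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18711-7633/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-821265-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				// The kubelet flips this condition once the runtime, network, and node are healthy.
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}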
	I0422 11:09:55.133102   27717 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 11:09:55.133179   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:09:55.133192   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.133203   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.133224   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.141303   27717 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0422 11:09:55.148059   27717 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ft2jl" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.148131   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ft2jl
	I0422 11:09:55.148143   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.148150   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.148154   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.151339   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:55.152110   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:09:55.152125   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.152132   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.152135   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.155067   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:09:55.155832   27717 pod_ready.go:92] pod "coredns-7db6d8ff4d-ft2jl" in "kube-system" namespace has status "Ready":"True"
	I0422 11:09:55.155847   27717 pod_ready.go:81] duration metric: took 7.763906ms for pod "coredns-7db6d8ff4d-ft2jl" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.155855   27717 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ht7jl" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.155897   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ht7jl
	I0422 11:09:55.155907   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.155914   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.155917   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.158817   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:09:55.159579   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:09:55.159591   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.159597   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.159601   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.162388   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:09:55.162982   27717 pod_ready.go:92] pod "coredns-7db6d8ff4d-ht7jl" in "kube-system" namespace has status "Ready":"True"
	I0422 11:09:55.163003   27717 pod_ready.go:81] duration metric: took 7.140664ms for pod "coredns-7db6d8ff4d-ht7jl" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.163015   27717 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.163078   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265
	I0422 11:09:55.163089   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.163096   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.163101   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.166984   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:55.167590   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:09:55.167603   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.167616   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.167621   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.170261   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:09:55.170809   27717 pod_ready.go:92] pod "etcd-ha-821265" in "kube-system" namespace has status "Ready":"True"
	I0422 11:09:55.170824   27717 pod_ready.go:81] duration metric: took 7.801021ms for pod "etcd-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.170831   27717 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.170881   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:09:55.170890   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.170897   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.170900   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.173959   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:55.174967   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:09:55.175012   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.175031   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.175039   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.179210   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:09:55.180554   27717 pod_ready.go:92] pod "etcd-ha-821265-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 11:09:55.180569   27717 pod_ready.go:81] duration metric: took 9.73166ms for pod "etcd-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.180577   27717 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-821265-m03" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.328927   27717 request.go:629] Waited for 148.302005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:55.328983   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:55.328988   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.328996   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.329002   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.332451   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:55.528632   27717 request.go:629] Waited for 195.409677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:55.528695   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:55.528707   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.528718   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.528726   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.535731   27717 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
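The request.go "Waited … due to client-side throttling" lines are client-go's own rate limiter, not the API server: the rest.Config logged earlier has QPS:0 and Burst:0, so the client-go defaults of 5 requests per second with a burst of 10 apply, and tight polling loops get briefly delayed. A sketch of raising those limits when building the client, should the throttling ever matter; the values here are arbitrary:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18711-7633/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // sustained request rate (default 5 when left at zero)
	cfg.Burst = 100 // burst ceiling (default 10 when left at zero)
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", cs != nil)
}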
	I0422 11:09:55.728896   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:55.728919   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.728928   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.728934   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.732722   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:55.928396   27717 request.go:629] Waited for 194.410291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:55.928472   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:55.928479   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.928490   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.928503   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.931979   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:56.181777   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:56.181799   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:56.181830   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:56.181839   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:56.185731   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:56.327972   27717 request.go:629] Waited for 141.220617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:56.328022   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:56.328028   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:56.328035   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:56.328042   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:56.332996   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:09:56.681455   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:56.681479   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:56.681487   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:56.681491   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:56.685257   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:56.728520   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:56.728549   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:56.728561   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:56.728569   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:56.745726   27717 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0422 11:09:57.181343   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:57.181366   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:57.181374   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:57.181378   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:57.184555   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:57.185663   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:57.185682   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:57.185688   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:57.185692   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:57.188717   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:57.189405   27717 pod_ready.go:102] pod "etcd-ha-821265-m03" in "kube-system" namespace has status "Ready":"False"
	I0422 11:09:57.681048   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:57.681074   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:57.681085   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:57.681097   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:57.684566   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:57.685854   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:57.685870   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:57.685877   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:57.685882   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:57.689062   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:58.180918   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:58.180952   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:58.180963   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:58.180970   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:58.183926   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:09:58.184547   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:58.184561   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:58.184572   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:58.184581   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:58.187509   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:09:58.680994   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:58.681017   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:58.681024   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:58.681029   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:58.685012   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:58.685695   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:58.685714   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:58.685725   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:58.685730   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:58.688908   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:59.180741   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:59.180762   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:59.180787   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:59.180792   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:59.184823   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:09:59.185665   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:59.185685   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:59.185696   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:59.185701   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:59.188861   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:59.189395   27717 pod_ready.go:92] pod "etcd-ha-821265-m03" in "kube-system" namespace has status "Ready":"True"
	I0422 11:09:59.189409   27717 pod_ready.go:81] duration metric: took 4.008826567s for pod "etcd-ha-821265-m03" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:59.189427   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:59.189478   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265
	I0422 11:09:59.189487   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:59.189494   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:59.189497   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:59.192943   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:59.193737   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:09:59.193754   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:59.193765   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:59.193775   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:59.196429   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:09:59.197417   27717 pod_ready.go:92] pod "kube-apiserver-ha-821265" in "kube-system" namespace has status "Ready":"True"
	I0422 11:09:59.197432   27717 pod_ready.go:81] duration metric: took 7.996435ms for pod "kube-apiserver-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:59.197440   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:59.328829   27717 request.go:629] Waited for 131.304831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:09:59.328882   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:09:59.328887   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:59.328894   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:59.328899   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:59.332794   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:59.528022   27717 request.go:629] Waited for 194.201432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:09:59.528098   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:09:59.528106   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:59.528115   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:59.528125   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:59.532398   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:09:59.533198   27717 pod_ready.go:92] pod "kube-apiserver-ha-821265-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 11:09:59.533216   27717 pod_ready.go:81] duration metric: took 335.771232ms for pod "kube-apiserver-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:59.533225   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-821265-m03" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:59.728352   27717 request.go:629] Waited for 195.060151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m03
	I0422 11:09:59.728408   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m03
	I0422 11:09:59.728413   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:59.728420   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:59.728425   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:59.732114   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:59.928310   27717 request.go:629] Waited for 195.371996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:59.928363   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:59.928368   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:59.928375   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:59.928381   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:59.931934   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:59.932490   27717 pod_ready.go:92] pod "kube-apiserver-ha-821265-m03" in "kube-system" namespace has status "Ready":"True"
	I0422 11:09:59.932510   27717 pod_ready.go:81] duration metric: took 399.279596ms for pod "kube-apiserver-ha-821265-m03" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:59.932520   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:00.128683   27717 request.go:629] Waited for 196.072405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265
	I0422 11:10:00.128749   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265
	I0422 11:10:00.128756   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:00.128768   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:00.128793   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:00.134879   27717 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0422 11:10:00.328214   27717 request.go:629] Waited for 191.35653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:10:00.328265   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:10:00.328270   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:00.328277   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:00.328281   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:00.332026   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:00.332684   27717 pod_ready.go:92] pod "kube-controller-manager-ha-821265" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:00.332701   27717 pod_ready.go:81] duration metric: took 400.174492ms for pod "kube-controller-manager-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:00.332713   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:00.528856   27717 request.go:629] Waited for 196.071774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m02
	I0422 11:10:00.528928   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m02
	I0422 11:10:00.528933   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:00.528940   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:00.528945   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:00.533521   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:10:00.728986   27717 request.go:629] Waited for 194.304056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:10:00.729068   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:10:00.729076   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:00.729087   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:00.729094   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:00.732973   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:00.733573   27717 pod_ready.go:92] pod "kube-controller-manager-ha-821265-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:00.733594   27717 pod_ready.go:81] duration metric: took 400.873731ms for pod "kube-controller-manager-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:00.733603   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-821265-m03" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:00.928318   27717 request.go:629] Waited for 194.651614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:00.928378   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:00.928383   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:00.928390   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:00.928395   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:00.932259   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:01.128530   27717 request.go:629] Waited for 195.398787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:01.128618   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:01.128629   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:01.128639   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:01.128651   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:01.132230   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:01.328424   27717 request.go:629] Waited for 94.271832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:01.328528   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:01.328549   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:01.328561   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:01.328572   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:01.332158   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:01.528495   27717 request.go:629] Waited for 195.503425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:01.528548   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:01.528554   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:01.528565   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:01.528571   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:01.538438   27717 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0422 11:10:01.734068   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:01.734092   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:01.734104   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:01.734109   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:01.738513   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:10:01.928571   27717 request.go:629] Waited for 189.028209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:01.928645   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:01.928672   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:01.928680   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:01.928687   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:01.932382   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:02.234774   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:02.234802   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:02.234814   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:02.234821   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:02.240337   27717 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 11:10:02.328656   27717 request.go:629] Waited for 87.297571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:02.328727   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:02.328734   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:02.328758   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:02.328788   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:02.332414   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:02.734739   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:02.734760   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:02.734774   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:02.734787   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:02.738826   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:10:02.739835   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:02.739853   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:02.739860   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:02.739863   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:02.743047   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:02.743632   27717 pod_ready.go:102] pod "kube-controller-manager-ha-821265-m03" in "kube-system" namespace has status "Ready":"False"
	I0422 11:10:03.233882   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:03.233910   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:03.233919   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:03.233923   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:03.238095   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:10:03.239086   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:03.239107   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:03.239119   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:03.239124   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:03.242697   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:03.734022   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:03.734043   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:03.734048   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:03.734052   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:03.738415   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:10:03.739174   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:03.739188   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:03.739195   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:03.739200   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:03.742528   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:04.234024   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:04.234045   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:04.234053   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:04.234058   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:04.238064   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:04.238672   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:04.238692   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:04.238701   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:04.238708   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:04.241801   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:04.242490   27717 pod_ready.go:92] pod "kube-controller-manager-ha-821265-m03" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:04.242517   27717 pod_ready.go:81] duration metric: took 3.508907065s for pod "kube-controller-manager-ha-821265-m03" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:04.242530   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j2hpk" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:04.242597   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j2hpk
	I0422 11:10:04.242609   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:04.242618   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:04.242623   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:04.245689   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:04.328738   27717 request.go:629] Waited for 82.253896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:10:04.328861   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:10:04.328872   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:04.328879   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:04.328884   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:04.332660   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:04.333455   27717 pod_ready.go:92] pod "kube-proxy-j2hpk" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:04.333477   27717 pod_ready.go:81] duration metric: took 90.940541ms for pod "kube-proxy-j2hpk" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:04.333486   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lmhp7" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:04.528909   27717 request.go:629] Waited for 195.350003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lmhp7
	I0422 11:10:04.528960   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lmhp7
	I0422 11:10:04.528965   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:04.528972   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:04.528977   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:04.533521   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:10:04.728595   27717 request.go:629] Waited for 194.421308ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:04.728664   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:04.728672   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:04.728683   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:04.728688   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:04.732667   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:04.733609   27717 pod_ready.go:92] pod "kube-proxy-lmhp7" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:04.733631   27717 pod_ready.go:81] duration metric: took 400.138637ms for pod "kube-proxy-lmhp7" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:04.733641   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w7r9d" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:04.928836   27717 request.go:629] Waited for 195.095072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w7r9d
	I0422 11:10:04.928909   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w7r9d
	I0422 11:10:04.928920   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:04.928935   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:04.928943   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:04.933134   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:10:05.128343   27717 request.go:629] Waited for 194.398682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:10:05.128436   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:10:05.128443   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:05.128450   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:05.128457   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:05.132814   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:10:05.133630   27717 pod_ready.go:92] pod "kube-proxy-w7r9d" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:05.133649   27717 pod_ready.go:81] duration metric: took 400.001653ms for pod "kube-proxy-w7r9d" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:05.133658   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:05.328876   27717 request.go:629] Waited for 195.125957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265
	I0422 11:10:05.328943   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265
	I0422 11:10:05.328951   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:05.328962   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:05.328971   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:05.332942   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:05.528399   27717 request.go:629] Waited for 194.35396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:10:05.528491   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:10:05.528502   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:05.528509   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:05.528515   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:05.532124   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:05.532922   27717 pod_ready.go:92] pod "kube-scheduler-ha-821265" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:05.532944   27717 pod_ready.go:81] duration metric: took 399.278603ms for pod "kube-scheduler-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:05.532956   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:05.727969   27717 request.go:629] Waited for 194.954055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265-m02
	I0422 11:10:05.728066   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265-m02
	I0422 11:10:05.728078   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:05.728089   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:05.728100   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:05.731528   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:05.928856   27717 request.go:629] Waited for 196.426732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:10:05.928913   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:10:05.928918   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:05.928925   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:05.928929   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:05.932832   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:05.933393   27717 pod_ready.go:92] pod "kube-scheduler-ha-821265-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:05.933410   27717 pod_ready.go:81] duration metric: took 400.447952ms for pod "kube-scheduler-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:05.933419   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-821265-m03" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:06.128597   27717 request.go:629] Waited for 195.116076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265-m03
	I0422 11:10:06.128669   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265-m03
	I0422 11:10:06.128674   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:06.128681   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:06.128689   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:06.134971   27717 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0422 11:10:06.328097   27717 request.go:629] Waited for 192.2814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:06.328160   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:06.328165   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:06.328173   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:06.328178   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:06.331467   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:06.332150   27717 pod_ready.go:92] pod "kube-scheduler-ha-821265-m03" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:06.332169   27717 pod_ready.go:81] duration metric: took 398.74421ms for pod "kube-scheduler-ha-821265-m03" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:06.332181   27717 pod_ready.go:38] duration metric: took 11.199068135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 11:10:06.332193   27717 api_server.go:52] waiting for apiserver process to appear ...
	I0422 11:10:06.332242   27717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:10:06.350635   27717 api_server.go:72] duration metric: took 19.567842113s to wait for apiserver process to appear ...
	I0422 11:10:06.350664   27717 api_server.go:88] waiting for apiserver healthz status ...
	I0422 11:10:06.350685   27717 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0422 11:10:06.356504   27717 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I0422 11:10:06.356574   27717 round_trippers.go:463] GET https://192.168.39.150:8443/version
	I0422 11:10:06.356583   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:06.356591   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:06.356600   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:06.357536   27717 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0422 11:10:06.357607   27717 api_server.go:141] control plane version: v1.30.0
	I0422 11:10:06.357625   27717 api_server.go:131] duration metric: took 6.954129ms to wait for apiserver health ...
	I0422 11:10:06.357637   27717 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 11:10:06.528362   27717 request.go:629] Waited for 170.649697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:10:06.528425   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:10:06.528432   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:06.528442   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:06.528453   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:06.556565   27717 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0422 11:10:06.566367   27717 system_pods.go:59] 24 kube-system pods found
	I0422 11:10:06.566402   27717 system_pods.go:61] "coredns-7db6d8ff4d-ft2jl" [09e14815-b8e9-4b60-9b2c-c7d86cccb594] Running
	I0422 11:10:06.566408   27717 system_pods.go:61] "coredns-7db6d8ff4d-ht7jl" [c404a830-ddce-4c49-9e54-05d45871b4b0] Running
	I0422 11:10:06.566412   27717 system_pods.go:61] "etcd-ha-821265" [1a27ab5d-19af-49d9-8eb3-e50b7e2225a5] Running
	I0422 11:10:06.566417   27717 system_pods.go:61] "etcd-ha-821265-m02" [4ba0de26-81d6-423b-a5a4-9fd88c90ebdc] Running
	I0422 11:10:06.566422   27717 system_pods.go:61] "etcd-ha-821265-m03" [43ef0886-3651-4313-847d-ee6cd15ec411] Running
	I0422 11:10:06.566427   27717 system_pods.go:61] "kindnet-d8qgr" [ec965a08-bffa-46ef-8edf-a3f29cb9b474] Running
	I0422 11:10:06.566431   27717 system_pods.go:61] "kindnet-jm2pd" [0550a9db-b106-4ac4-9976-118d80927509] Running
	I0422 11:10:06.566435   27717 system_pods.go:61] "kindnet-qbq9z" [9751a17f-e26b-4ba8-81ce-077103c0aa1c] Running
	I0422 11:10:06.566440   27717 system_pods.go:61] "kube-apiserver-ha-821265" [1e20fb49-c54d-49fd-900b-38e347a52f9a] Running
	I0422 11:10:06.566445   27717 system_pods.go:61] "kube-apiserver-ha-821265-m02" [95616042-7a05-4fc3-a1ef-7fd56c8b3cd8] Running
	I0422 11:10:06.566450   27717 system_pods.go:61] "kube-apiserver-ha-821265-m03" [d2cd8a48-ff79-48cd-9096-99c240d07879] Running
	I0422 11:10:06.566455   27717 system_pods.go:61] "kube-controller-manager-ha-821265" [51933fc1-af7c-4fb0-b811-b6312f4b4d29] Running
	I0422 11:10:06.566460   27717 system_pods.go:61] "kube-controller-manager-ha-821265-m02" [4af2c432-4c7c-4f1f-98da-34af2648d7db] Running
	I0422 11:10:06.566465   27717 system_pods.go:61] "kube-controller-manager-ha-821265-m03" [06ea7b1f-409d-43a6-9493-bc4c24f3f536] Running
	I0422 11:10:06.566471   27717 system_pods.go:61] "kube-proxy-j2hpk" [3ebf4ab0-bc76-4f5c-916e-6b28a81dc031] Running
	I0422 11:10:06.566478   27717 system_pods.go:61] "kube-proxy-lmhp7" [45383871-e744-4764-823a-060a498ebc51] Running
	I0422 11:10:06.566483   27717 system_pods.go:61] "kube-proxy-w7r9d" [56a4f7fc-5ce0-4d77-b30f-9d39cded457c] Running
	I0422 11:10:06.566488   27717 system_pods.go:61] "kube-scheduler-ha-821265" [929e0c00-c49a-4b96-8f6a-7a84ae4f117c] Running
	I0422 11:10:06.566499   27717 system_pods.go:61] "kube-scheduler-ha-821265-m02" [589c30c7-d9df-4745-bdb3-87ae02ab2b67] Running
	I0422 11:10:06.566504   27717 system_pods.go:61] "kube-scheduler-ha-821265-m03" [d57674c8-cc46-4da5-9be1-01675f656b35] Running
	I0422 11:10:06.566511   27717 system_pods.go:61] "kube-vip-ha-821265" [9322f0ee-9e3e-4585-9388-44ccd1417371] Running
	I0422 11:10:06.566516   27717 system_pods.go:61] "kube-vip-ha-821265-m02" [466697de-7dbe-4e6c-be95-9463a9548cde] Running
	I0422 11:10:06.566524   27717 system_pods.go:61] "kube-vip-ha-821265-m03" [a4b446ae-5369-4b1e-bd82-be6fb4110c4c] Running
	I0422 11:10:06.566528   27717 system_pods.go:61] "storage-provisioner" [4b44da93-f3fa-49b7-a701-5ab7a430374f] Running
	I0422 11:10:06.566538   27717 system_pods.go:74] duration metric: took 208.894811ms to wait for pod list to return data ...
	I0422 11:10:06.566555   27717 default_sa.go:34] waiting for default service account to be created ...
	I0422 11:10:06.728319   27717 request.go:629] Waited for 161.692929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/default/serviceaccounts
	I0422 11:10:06.728371   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/default/serviceaccounts
	I0422 11:10:06.728376   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:06.728383   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:06.728387   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:06.731764   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:06.731867   27717 default_sa.go:45] found service account: "default"
	I0422 11:10:06.731884   27717 default_sa.go:55] duration metric: took 165.321362ms for default service account to be created ...
	I0422 11:10:06.731893   27717 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 11:10:06.928504   27717 request.go:629] Waited for 196.544322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:10:06.928576   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:10:06.928582   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:06.928593   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:06.928597   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:06.936268   27717 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0422 11:10:06.943113   27717 system_pods.go:86] 24 kube-system pods found
	I0422 11:10:06.943142   27717 system_pods.go:89] "coredns-7db6d8ff4d-ft2jl" [09e14815-b8e9-4b60-9b2c-c7d86cccb594] Running
	I0422 11:10:06.943148   27717 system_pods.go:89] "coredns-7db6d8ff4d-ht7jl" [c404a830-ddce-4c49-9e54-05d45871b4b0] Running
	I0422 11:10:06.943152   27717 system_pods.go:89] "etcd-ha-821265" [1a27ab5d-19af-49d9-8eb3-e50b7e2225a5] Running
	I0422 11:10:06.943156   27717 system_pods.go:89] "etcd-ha-821265-m02" [4ba0de26-81d6-423b-a5a4-9fd88c90ebdc] Running
	I0422 11:10:06.943160   27717 system_pods.go:89] "etcd-ha-821265-m03" [43ef0886-3651-4313-847d-ee6cd15ec411] Running
	I0422 11:10:06.943164   27717 system_pods.go:89] "kindnet-d8qgr" [ec965a08-bffa-46ef-8edf-a3f29cb9b474] Running
	I0422 11:10:06.943168   27717 system_pods.go:89] "kindnet-jm2pd" [0550a9db-b106-4ac4-9976-118d80927509] Running
	I0422 11:10:06.943172   27717 system_pods.go:89] "kindnet-qbq9z" [9751a17f-e26b-4ba8-81ce-077103c0aa1c] Running
	I0422 11:10:06.943176   27717 system_pods.go:89] "kube-apiserver-ha-821265" [1e20fb49-c54d-49fd-900b-38e347a52f9a] Running
	I0422 11:10:06.943180   27717 system_pods.go:89] "kube-apiserver-ha-821265-m02" [95616042-7a05-4fc3-a1ef-7fd56c8b3cd8] Running
	I0422 11:10:06.943183   27717 system_pods.go:89] "kube-apiserver-ha-821265-m03" [d2cd8a48-ff79-48cd-9096-99c240d07879] Running
	I0422 11:10:06.943187   27717 system_pods.go:89] "kube-controller-manager-ha-821265" [51933fc1-af7c-4fb0-b811-b6312f4b4d29] Running
	I0422 11:10:06.943193   27717 system_pods.go:89] "kube-controller-manager-ha-821265-m02" [4af2c432-4c7c-4f1f-98da-34af2648d7db] Running
	I0422 11:10:06.943200   27717 system_pods.go:89] "kube-controller-manager-ha-821265-m03" [06ea7b1f-409d-43a6-9493-bc4c24f3f536] Running
	I0422 11:10:06.943205   27717 system_pods.go:89] "kube-proxy-j2hpk" [3ebf4ab0-bc76-4f5c-916e-6b28a81dc031] Running
	I0422 11:10:06.943208   27717 system_pods.go:89] "kube-proxy-lmhp7" [45383871-e744-4764-823a-060a498ebc51] Running
	I0422 11:10:06.943212   27717 system_pods.go:89] "kube-proxy-w7r9d" [56a4f7fc-5ce0-4d77-b30f-9d39cded457c] Running
	I0422 11:10:06.943215   27717 system_pods.go:89] "kube-scheduler-ha-821265" [929e0c00-c49a-4b96-8f6a-7a84ae4f117c] Running
	I0422 11:10:06.943219   27717 system_pods.go:89] "kube-scheduler-ha-821265-m02" [589c30c7-d9df-4745-bdb3-87ae02ab2b67] Running
	I0422 11:10:06.943223   27717 system_pods.go:89] "kube-scheduler-ha-821265-m03" [d57674c8-cc46-4da5-9be1-01675f656b35] Running
	I0422 11:10:06.943227   27717 system_pods.go:89] "kube-vip-ha-821265" [9322f0ee-9e3e-4585-9388-44ccd1417371] Running
	I0422 11:10:06.943230   27717 system_pods.go:89] "kube-vip-ha-821265-m02" [466697de-7dbe-4e6c-be95-9463a9548cde] Running
	I0422 11:10:06.943234   27717 system_pods.go:89] "kube-vip-ha-821265-m03" [a4b446ae-5369-4b1e-bd82-be6fb4110c4c] Running
	I0422 11:10:06.943237   27717 system_pods.go:89] "storage-provisioner" [4b44da93-f3fa-49b7-a701-5ab7a430374f] Running
	I0422 11:10:06.943247   27717 system_pods.go:126] duration metric: took 211.344123ms to wait for k8s-apps to be running ...
	I0422 11:10:06.943254   27717 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 11:10:06.943298   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:10:06.960136   27717 system_svc.go:56] duration metric: took 16.870275ms WaitForService to wait for kubelet
	I0422 11:10:06.960172   27717 kubeadm.go:576] duration metric: took 20.177382765s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 11:10:06.960195   27717 node_conditions.go:102] verifying NodePressure condition ...
	I0422 11:10:07.128853   27717 request.go:629] Waited for 168.556002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes
	I0422 11:10:07.128909   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes
	I0422 11:10:07.128913   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:07.128920   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:07.128924   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:07.134203   27717 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 11:10:07.136104   27717 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 11:10:07.136122   27717 node_conditions.go:123] node cpu capacity is 2
	I0422 11:10:07.136131   27717 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 11:10:07.136135   27717 node_conditions.go:123] node cpu capacity is 2
	I0422 11:10:07.136138   27717 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 11:10:07.136141   27717 node_conditions.go:123] node cpu capacity is 2
	I0422 11:10:07.136145   27717 node_conditions.go:105] duration metric: took 175.945498ms to run NodePressure ...
	I0422 11:10:07.136156   27717 start.go:240] waiting for startup goroutines ...
	I0422 11:10:07.136173   27717 start.go:254] writing updated cluster config ...
	I0422 11:10:07.136460   27717 ssh_runner.go:195] Run: rm -f paused
	I0422 11:10:07.188977   27717 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 11:10:07.191080   27717 out.go:177] * Done! kubectl is now configured to use "ha-821265" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.618322228Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713784419618297771,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c2610ccd-cede-4048-86a6-29c9e165894a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.619475247Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=755fbc1e-a0f9-4c69-8807-317d9fd378d5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.619529817Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=755fbc1e-a0f9-4c69-8807-317d9fd378d5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.619819592Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f9e45e23c690bb79c7fd65070b3188b60b1c0041e0955b10386851453d93e8c2,PodSandboxId:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713784211175253270,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391,PodSandboxId:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784060436897502,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139,PodSandboxId:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784060349824691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60306e6c18db97251960e340b26fd7591b71b65493a6e0603cccec3458948a44,PodSandboxId:b2f58af56b111bfad58560278e986cc2852b5ea20e89eb68900084ce537ba0be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1713784060256832706,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68514e3b402ea6cc11c51909fb9a2918a4580e62c5d019c9280d5fd40c8408cf,PodSandboxId:5694d3bdc4521fd36b2ea53baa3bd587487c1067d997f850c02dbe873a1776c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17137840
58198287836,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269,PodSandboxId:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713784057949961038,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ec191f8bcbef49468ef3d9b903de2da840c90478ee97540859b8f37f581f1,PodSandboxId:c0bfe906cafdccf860bd19ca9d4e03e86c477589df6194923b8485f838400aad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713784038374193381,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a9d642b5b95959b9f509e42995bd869,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5,PodSandboxId:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713784035610618080,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652741477fa90fca19fc111b1191a6acd0e2edcee141e389e5fd84f6018ec38e,PodSandboxId:e36b4c8b43c66a7d4a5f4c59ce3a0900d5545b5ef014af353b925a642266dc96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713784035573890703,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cbf52d94248bdbe7ca0e2622c441a457f4747f2d8e8969d25f7b6e629e1b566,PodSandboxId:9de13b553c43b35e9aa30be717e083ac22af034f154d3238f9af3b74b9cfa0e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713784035468971146,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803,PodSandboxId:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713784035389146521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=755fbc1e-a0f9-4c69-8807-317d9fd378d5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.663867007Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f545e53a-ad3b-471f-9357-514210de2e5c name=/runtime.v1.RuntimeService/Version
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.663942314Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f545e53a-ad3b-471f-9357-514210de2e5c name=/runtime.v1.RuntimeService/Version
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.668389090Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b88935e-4989-4cc0-923d-fe72d75f0c6e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.669016840Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713784419668987161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b88935e-4989-4cc0-923d-fe72d75f0c6e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.670184605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a24d05a-06ba-4fad-9228-b0ce182f6b7b name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.670240088Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a24d05a-06ba-4fad-9228-b0ce182f6b7b name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.670751834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f9e45e23c690bb79c7fd65070b3188b60b1c0041e0955b10386851453d93e8c2,PodSandboxId:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713784211175253270,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391,PodSandboxId:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784060436897502,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139,PodSandboxId:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784060349824691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60306e6c18db97251960e340b26fd7591b71b65493a6e0603cccec3458948a44,PodSandboxId:b2f58af56b111bfad58560278e986cc2852b5ea20e89eb68900084ce537ba0be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1713784060256832706,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68514e3b402ea6cc11c51909fb9a2918a4580e62c5d019c9280d5fd40c8408cf,PodSandboxId:5694d3bdc4521fd36b2ea53baa3bd587487c1067d997f850c02dbe873a1776c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17137840
58198287836,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269,PodSandboxId:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713784057949961038,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ec191f8bcbef49468ef3d9b903de2da840c90478ee97540859b8f37f581f1,PodSandboxId:c0bfe906cafdccf860bd19ca9d4e03e86c477589df6194923b8485f838400aad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713784038374193381,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a9d642b5b95959b9f509e42995bd869,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5,PodSandboxId:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713784035610618080,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652741477fa90fca19fc111b1191a6acd0e2edcee141e389e5fd84f6018ec38e,PodSandboxId:e36b4c8b43c66a7d4a5f4c59ce3a0900d5545b5ef014af353b925a642266dc96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713784035573890703,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cbf52d94248bdbe7ca0e2622c441a457f4747f2d8e8969d25f7b6e629e1b566,PodSandboxId:9de13b553c43b35e9aa30be717e083ac22af034f154d3238f9af3b74b9cfa0e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713784035468971146,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803,PodSandboxId:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713784035389146521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a24d05a-06ba-4fad-9228-b0ce182f6b7b name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.715656758Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69d8ccb6-e8d8-483f-b205-50c38b8eca19 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.715734193Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69d8ccb6-e8d8-483f-b205-50c38b8eca19 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.717524822Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98f0b4c9-02ac-447b-a4a0-ecea1d79997b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.718356918Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713784419718332041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98f0b4c9-02ac-447b-a4a0-ecea1d79997b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.719080187Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c83bc714-83e6-4d17-a3c8-f538be8f20a8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.719164328Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c83bc714-83e6-4d17-a3c8-f538be8f20a8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.719388583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f9e45e23c690bb79c7fd65070b3188b60b1c0041e0955b10386851453d93e8c2,PodSandboxId:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713784211175253270,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391,PodSandboxId:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784060436897502,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139,PodSandboxId:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784060349824691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60306e6c18db97251960e340b26fd7591b71b65493a6e0603cccec3458948a44,PodSandboxId:b2f58af56b111bfad58560278e986cc2852b5ea20e89eb68900084ce537ba0be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1713784060256832706,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68514e3b402ea6cc11c51909fb9a2918a4580e62c5d019c9280d5fd40c8408cf,PodSandboxId:5694d3bdc4521fd36b2ea53baa3bd587487c1067d997f850c02dbe873a1776c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17137840
58198287836,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269,PodSandboxId:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713784057949961038,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ec191f8bcbef49468ef3d9b903de2da840c90478ee97540859b8f37f581f1,PodSandboxId:c0bfe906cafdccf860bd19ca9d4e03e86c477589df6194923b8485f838400aad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713784038374193381,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a9d642b5b95959b9f509e42995bd869,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5,PodSandboxId:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713784035610618080,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652741477fa90fca19fc111b1191a6acd0e2edcee141e389e5fd84f6018ec38e,PodSandboxId:e36b4c8b43c66a7d4a5f4c59ce3a0900d5545b5ef014af353b925a642266dc96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713784035573890703,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cbf52d94248bdbe7ca0e2622c441a457f4747f2d8e8969d25f7b6e629e1b566,PodSandboxId:9de13b553c43b35e9aa30be717e083ac22af034f154d3238f9af3b74b9cfa0e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713784035468971146,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803,PodSandboxId:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713784035389146521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c83bc714-83e6-4d17-a3c8-f538be8f20a8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.765016484Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e45a40e7-b130-403d-85a5-9e03d9caf47f name=/runtime.v1.RuntimeService/Version
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.765120226Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e45a40e7-b130-403d-85a5-9e03d9caf47f name=/runtime.v1.RuntimeService/Version
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.766129869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d23ec59b-99e5-4093-a389-4b47a37f4119 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.766983519Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713784419766957766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d23ec59b-99e5-4093-a389-4b47a37f4119 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.767631531Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=636aa514-82fd-432d-80fc-488e7161a918 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.767755427Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=636aa514-82fd-432d-80fc-488e7161a918 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:13:39 ha-821265 crio[676]: time="2024-04-22 11:13:39.768022043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f9e45e23c690bb79c7fd65070b3188b60b1c0041e0955b10386851453d93e8c2,PodSandboxId:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713784211175253270,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391,PodSandboxId:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784060436897502,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139,PodSandboxId:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784060349824691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60306e6c18db97251960e340b26fd7591b71b65493a6e0603cccec3458948a44,PodSandboxId:b2f58af56b111bfad58560278e986cc2852b5ea20e89eb68900084ce537ba0be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1713784060256832706,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68514e3b402ea6cc11c51909fb9a2918a4580e62c5d019c9280d5fd40c8408cf,PodSandboxId:5694d3bdc4521fd36b2ea53baa3bd587487c1067d997f850c02dbe873a1776c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17137840
58198287836,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269,PodSandboxId:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713784057949961038,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ec191f8bcbef49468ef3d9b903de2da840c90478ee97540859b8f37f581f1,PodSandboxId:c0bfe906cafdccf860bd19ca9d4e03e86c477589df6194923b8485f838400aad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713784038374193381,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a9d642b5b95959b9f509e42995bd869,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5,PodSandboxId:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713784035610618080,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652741477fa90fca19fc111b1191a6acd0e2edcee141e389e5fd84f6018ec38e,PodSandboxId:e36b4c8b43c66a7d4a5f4c59ce3a0900d5545b5ef014af353b925a642266dc96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713784035573890703,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cbf52d94248bdbe7ca0e2622c441a457f4747f2d8e8969d25f7b6e629e1b566,PodSandboxId:9de13b553c43b35e9aa30be717e083ac22af034f154d3238f9af3b74b9cfa0e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713784035468971146,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803,PodSandboxId:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713784035389146521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=636aa514-82fd-432d-80fc-488e7161a918 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f9e45e23c690b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   82d54024bc68a       busybox-fc5497c4f-b4r5w
	28dbe3373b660       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   126db08ea55ac       coredns-7db6d8ff4d-ht7jl
	609e2855f754c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   84aaf42f76a8a       coredns-7db6d8ff4d-ft2jl
	60306e6c18db9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   b2f58af56b111       storage-provisioner
	68514e3b402ea       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   5694d3bdc4521       kindnet-qbq9z
	1f43ea569f86c       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      6 minutes ago       Running             kube-proxy                0                   626e64c737b2d       kube-proxy-w7r9d
	a26ec191f8bcb       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Running             kube-vip                  0                   c0bfe906cafdc       kube-vip-ha-821265
	2b3935bd9c893       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      6 minutes ago       Running             kube-scheduler            0                   68a372e9f954b       kube-scheduler-ha-821265
	652741477fa90       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      6 minutes ago       Running             kube-controller-manager   0                   e36b4c8b43c66       kube-controller-manager-ha-821265
	7cbf52d94248b       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      6 minutes ago       Running             kube-apiserver            0                   9de13b553c43b       kube-apiserver-ha-821265
	ba49f85435f20       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   f773251009c17       etcd-ha-821265
	
	
	==> coredns [28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391] <==
	[INFO] 10.244.0.4:44847 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140809s
	[INFO] 10.244.0.4:35521 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000203677s
	[INFO] 10.244.1.2:55855 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000292202s
	[INFO] 10.244.1.2:40525 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001858622s
	[INFO] 10.244.1.2:43358 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160709s
	[INFO] 10.244.1.2:55629 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195731s
	[INFO] 10.244.1.2:44290 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121655s
	[INFO] 10.244.1.2:57358 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121564s
	[INFO] 10.244.2.2:59048 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159182s
	[INFO] 10.244.2.2:35567 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001954066s
	[INFO] 10.244.2.2:51799 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000221645s
	[INFO] 10.244.2.2:34300 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001398818s
	[INFO] 10.244.2.2:44605 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141089s
	[INFO] 10.244.2.2:60699 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114317s
	[INFO] 10.244.2.2:47652 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110384s
	[INFO] 10.244.0.4:58761 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147629s
	[INFO] 10.244.0.4:45372 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061515s
	[INFO] 10.244.1.2:39990 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000301231s
	[INFO] 10.244.2.2:38384 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218658s
	[INFO] 10.244.2.2:42087 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096499s
	[INFO] 10.244.2.2:46418 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091631s
	[INFO] 10.244.0.4:38705 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140004s
	[INFO] 10.244.2.2:47355 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124377s
	[INFO] 10.244.2.2:41383 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000176022s
	[INFO] 10.244.2.2:36036 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000263019s
	
	
	==> coredns [609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139] <==
	[INFO] 127.0.0.1:56528 - 52490 "HINFO IN 6584900057141735052.5629882702753792788. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017831721s
	[INFO] 10.244.0.4:39057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000647552s
	[INFO] 10.244.0.4:33128 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.014559084s
	[INFO] 10.244.1.2:55844 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178035s
	[INFO] 10.244.2.2:56677 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000145596s
	[INFO] 10.244.2.2:55471 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000502508s
	[INFO] 10.244.0.4:48892 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000180363s
	[INFO] 10.244.0.4:39631 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015636s
	[INFO] 10.244.1.2:41139 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001436054s
	[INFO] 10.244.1.2:50039 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000238831s
	[INFO] 10.244.2.2:49593 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099929s
	[INFO] 10.244.0.4:33617 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078273s
	[INFO] 10.244.0.4:35287 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154317s
	[INFO] 10.244.1.2:52682 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133804s
	[INFO] 10.244.1.2:40594 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130792s
	[INFO] 10.244.1.2:39775 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009509s
	[INFO] 10.244.2.2:55863 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00021768s
	[INFO] 10.244.0.4:36835 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092568s
	[INFO] 10.244.0.4:53708 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00016929s
	[INFO] 10.244.0.4:44024 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000203916s
	[INFO] 10.244.1.2:50167 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158884s
	[INFO] 10.244.1.2:49103 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120664s
	[INFO] 10.244.1.2:44739 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000212444s
	[INFO] 10.244.1.2:43569 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000207516s
	[INFO] 10.244.2.2:48876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000228682s
	
	
	==> describe nodes <==
	Name:               ha-821265
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-821265
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=ha-821265
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T11_07_22_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:07:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-821265
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:13:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:10:25 +0000   Mon, 22 Apr 2024 11:07:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:10:25 +0000   Mon, 22 Apr 2024 11:07:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:10:25 +0000   Mon, 22 Apr 2024 11:07:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:10:25 +0000   Mon, 22 Apr 2024 11:07:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    ha-821265
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3708e3d49144fe9a219d30c45824055
	  System UUID:                e3708e3d-4914-4fe9-a219-d30c45824055
	  Boot ID:                    59d6bf31-99bc-4f8f-942a-1d3384515d3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-b4r5w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 coredns-7db6d8ff4d-ft2jl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m3s
	  kube-system                 coredns-7db6d8ff4d-ht7jl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m3s
	  kube-system                 etcd-ha-821265                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m19s
	  kube-system                 kindnet-qbq9z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m3s
	  kube-system                 kube-apiserver-ha-821265             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-821265    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-proxy-w7r9d                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-ha-821265             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-vip-ha-821265                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m1s                   kube-proxy       
	  Normal  NodeHasSufficientPID     6m26s (x7 over 6m26s)  kubelet          Node ha-821265 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m26s (x8 over 6m26s)  kubelet          Node ha-821265 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s (x8 over 6m26s)  kubelet          Node ha-821265 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m19s                  kubelet          Node ha-821265 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s                  kubelet          Node ha-821265 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s                  kubelet          Node ha-821265 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m4s                   node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	  Normal  NodeReady                6m1s                   kubelet          Node ha-821265 status is now: NodeReady
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	  Normal  RegisteredNode           3m39s                  node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	
	
	Name:               ha-821265-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-821265-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=ha-821265
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T11_08_32_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:08:28 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-821265-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:11:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 22 Apr 2024 11:10:31 +0000   Mon, 22 Apr 2024 11:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 22 Apr 2024 11:10:31 +0000   Mon, 22 Apr 2024 11:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 22 Apr 2024 11:10:31 +0000   Mon, 22 Apr 2024 11:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 22 Apr 2024 11:10:31 +0000   Mon, 22 Apr 2024 11:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-821265-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee4ee33670c847d689ce31a8a149631b
	  System UUID:                ee4ee336-70c8-47d6-89ce-31a8a149631b
	  Boot ID:                    ec814c8f-fad1-48eb-83d3-5828e2f6775b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ft78k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-ha-821265-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m10s
	  kube-system                 kindnet-jm2pd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m12s
	  kube-system                 kube-apiserver-ha-821265-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-controller-manager-ha-821265-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-proxy-j2hpk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-scheduler-ha-821265-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-vip-ha-821265-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m12s (x8 over 5m12s)  kubelet          Node ha-821265-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m12s (x8 over 5m12s)  kubelet          Node ha-821265-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m12s (x7 over 5m12s)  kubelet          Node ha-821265-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	  Normal  RegisteredNode           3m39s                  node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	  Normal  NodeNotReady             104s                   node-controller  Node ha-821265-m02 status is now: NodeNotReady
	
	
	Name:               ha-821265-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-821265-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=ha-821265
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T11_09_46_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:09:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-821265-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:13:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:10:13 +0000   Mon, 22 Apr 2024 11:09:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:10:13 +0000   Mon, 22 Apr 2024 11:09:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:10:13 +0000   Mon, 22 Apr 2024 11:09:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:10:13 +0000   Mon, 22 Apr 2024 11:09:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    ha-821265-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fae8daa600b4453d8a90a572a44f23c8
	  System UUID:                fae8daa6-00b4-453d-8a90-a572a44f23c8
	  Boot ID:                    62e4e3f8-9bb3-4147-9a5d-9ce3b8996599
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fzcrw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-ha-821265-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m55s
	  kube-system                 kindnet-d8qgr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m57s
	  kube-system                 kube-apiserver-ha-821265-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-controller-manager-ha-821265-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-proxy-lmhp7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-ha-821265-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-vip-ha-821265-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m57s (x8 over 3m57s)  kubelet          Node ha-821265-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x8 over 3m57s)  kubelet          Node ha-821265-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x7 over 3m57s)  kubelet          Node ha-821265-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-821265-m03 event: Registered Node ha-821265-m03 in Controller
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-821265-m03 event: Registered Node ha-821265-m03 in Controller
	  Normal  RegisteredNode           3m39s                  node-controller  Node ha-821265-m03 event: Registered Node ha-821265-m03 in Controller
	
	
	Name:               ha-821265-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-821265-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=ha-821265
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T11_10_47_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:10:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-821265-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:13:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:11:17 +0000   Mon, 22 Apr 2024 11:10:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:11:17 +0000   Mon, 22 Apr 2024 11:10:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:11:17 +0000   Mon, 22 Apr 2024 11:10:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:11:17 +0000   Mon, 22 Apr 2024 11:10:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.252
	  Hostname:    ha-821265-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd9646c23a234a60a7a73b7377025a34
	  System UUID:                dd9646c2-3a23-4a60-a7a7-3b7377025a34
	  Boot ID:                    5cc549d7-73b1-4fa5-ab02-659fe0409704
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gvgbm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m54s
	  kube-system                 kube-proxy-hdvbv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  RegisteredNode           2m54s                  node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal  NodeHasSufficientMemory  2m54s (x2 over 2m54s)  kubelet          Node ha-821265-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m54s (x2 over 2m54s)  kubelet          Node ha-821265-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m54s (x2 over 2m54s)  kubelet          Node ha-821265-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m53s                  node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal  NodeReady                2m43s                  kubelet          Node ha-821265-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr22 11:06] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053897] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043778] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.665964] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.570884] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.736860] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr22 11:07] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.062413] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064974] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.181323] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.148920] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.299663] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.930467] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.065860] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.137174] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.064357] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.162362] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.079557] kauditd_printk_skb: 79 callbacks suppressed
	[ +16.384158] kauditd_printk_skb: 21 callbacks suppressed
	[Apr22 11:08] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803] <==
	{"level":"warn","ts":"2024-04-22T11:13:40.074045Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.082993Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.088668Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.09946Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.102043Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.111919Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.123139Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.131146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.135265Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.139157Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.15186Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.159511Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.167309Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.170499Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.174225Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.182425Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.189993Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.197136Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.198929Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.203657Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.208055Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.216043Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.223057Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.23019Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:13:40.299621Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:13:40 up 6 min,  0 users,  load average: 0.28, 0.36, 0.19
	Linux ha-821265 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [68514e3b402ea6cc11c51909fb9a2918a4580e62c5d019c9280d5fd40c8408cf] <==
	I0422 11:13:10.059109       1 main.go:250] Node ha-821265-m04 has CIDR [10.244.3.0/24] 
	I0422 11:13:20.077406       1 main.go:223] Handling node with IPs: map[192.168.39.150:{}]
	I0422 11:13:20.078660       1 main.go:227] handling current node
	I0422 11:13:20.078844       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I0422 11:13:20.078985       1 main.go:250] Node ha-821265-m02 has CIDR [10.244.1.0/24] 
	I0422 11:13:20.079294       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0422 11:13:20.079364       1 main.go:250] Node ha-821265-m03 has CIDR [10.244.2.0/24] 
	I0422 11:13:20.080826       1 main.go:223] Handling node with IPs: map[192.168.39.252:{}]
	I0422 11:13:20.081012       1 main.go:250] Node ha-821265-m04 has CIDR [10.244.3.0/24] 
	I0422 11:13:30.091437       1 main.go:223] Handling node with IPs: map[192.168.39.150:{}]
	I0422 11:13:30.091762       1 main.go:227] handling current node
	I0422 11:13:30.091832       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I0422 11:13:30.091860       1 main.go:250] Node ha-821265-m02 has CIDR [10.244.1.0/24] 
	I0422 11:13:30.092005       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0422 11:13:30.092029       1 main.go:250] Node ha-821265-m03 has CIDR [10.244.2.0/24] 
	I0422 11:13:30.092115       1 main.go:223] Handling node with IPs: map[192.168.39.252:{}]
	I0422 11:13:30.092138       1 main.go:250] Node ha-821265-m04 has CIDR [10.244.3.0/24] 
	I0422 11:13:40.106242       1 main.go:223] Handling node with IPs: map[192.168.39.150:{}]
	I0422 11:13:40.106263       1 main.go:227] handling current node
	I0422 11:13:40.106273       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I0422 11:13:40.106278       1 main.go:250] Node ha-821265-m02 has CIDR [10.244.1.0/24] 
	I0422 11:13:40.106372       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0422 11:13:40.106377       1 main.go:250] Node ha-821265-m03 has CIDR [10.244.2.0/24] 
	I0422 11:13:40.106423       1 main.go:223] Handling node with IPs: map[192.168.39.252:{}]
	I0422 11:13:40.106428       1 main.go:250] Node ha-821265-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7cbf52d94248bdbe7ca0e2622c441a457f4747f2d8e8969d25f7b6e629e1b566] <==
	I0422 11:07:20.788343       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0422 11:07:20.795514       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.150]
	I0422 11:07:20.796931       1 controller.go:615] quota admission added evaluator for: endpoints
	I0422 11:07:20.802119       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0422 11:07:21.650107       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0422 11:07:21.659779       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0422 11:07:21.695484       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0422 11:07:21.719175       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0422 11:07:37.012414       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0422 11:07:37.258019       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0422 11:10:13.058982       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54190: use of closed network connection
	E0422 11:10:13.284020       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54218: use of closed network connection
	E0422 11:10:13.515062       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54242: use of closed network connection
	E0422 11:10:13.744691       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54266: use of closed network connection
	E0422 11:10:13.958482       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54290: use of closed network connection
	E0422 11:10:14.166197       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54306: use of closed network connection
	E0422 11:10:14.367684       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54318: use of closed network connection
	E0422 11:10:14.567281       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54334: use of closed network connection
	E0422 11:10:14.774490       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54348: use of closed network connection
	E0422 11:10:15.120609       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54372: use of closed network connection
	E0422 11:10:15.327311       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54398: use of closed network connection
	E0422 11:10:15.541803       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54424: use of closed network connection
	E0422 11:10:15.747359       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54456: use of closed network connection
	E0422 11:10:15.977207       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54488: use of closed network connection
	E0422 11:10:16.174144       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54506: use of closed network connection
	
	
	==> kube-controller-manager [652741477fa90fca19fc111b1191a6acd0e2edcee141e389e5fd84f6018ec38e] <==
	I0422 11:09:43.274704       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-821265-m03\" does not exist"
	I0422 11:09:43.301064       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-821265-m03" podCIDRs=["10.244.2.0/24"]
	I0422 11:09:46.349739       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-821265-m03"
	I0422 11:10:08.190638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.178678ms"
	I0422 11:10:08.227463       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.618097ms"
	I0422 11:10:08.337429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="109.898774ms"
	I0422 11:10:08.582325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="244.823805ms"
	E0422 11:10:08.582375       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0422 11:10:08.582539       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="124.399µs"
	I0422 11:10:08.600168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.846µs"
	I0422 11:10:08.788992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.683µs"
	I0422 11:10:11.450292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.644186ms"
	I0422 11:10:11.450468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.457µs"
	I0422 11:10:12.315500       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.427625ms"
	I0422 11:10:12.315876       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.376µs"
	I0422 11:10:12.510138       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.283117ms"
	I0422 11:10:12.510681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.493µs"
	E0422 11:10:46.325269       1 certificate_controller.go:146] Sync csr-zbr6p failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-zbr6p": the object has been modified; please apply your changes to the latest version and try again
	I0422 11:10:46.598067       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-821265-m04\" does not exist"
	I0422 11:10:46.640138       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-821265-m04" podCIDRs=["10.244.3.0/24"]
	I0422 11:10:51.401049       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-821265-m04"
	I0422 11:10:57.926661       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-821265-m04"
	I0422 11:11:56.451193       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-821265-m04"
	I0422 11:11:56.567400       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.545461ms"
	I0422 11:11:56.568852       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.414µs"
	
	
	==> kube-proxy [1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269] <==
	I0422 11:07:38.328400       1 server_linux.go:69] "Using iptables proxy"
	I0422 11:07:38.341241       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.150"]
	I0422 11:07:38.416689       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 11:07:38.416754       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 11:07:38.416773       1 server_linux.go:165] "Using iptables Proxier"
	I0422 11:07:38.420819       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 11:07:38.421063       1 server.go:872] "Version info" version="v1.30.0"
	I0422 11:07:38.421099       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:07:38.422051       1 config.go:192] "Starting service config controller"
	I0422 11:07:38.422060       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 11:07:38.422106       1 config.go:101] "Starting endpoint slice config controller"
	I0422 11:07:38.422112       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 11:07:38.423874       1 config.go:319] "Starting node config controller"
	I0422 11:07:38.423884       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 11:07:38.522915       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 11:07:38.522987       1 shared_informer.go:320] Caches are synced for service config
	I0422 11:07:38.524398       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5] <==
	I0422 11:07:21.619771       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0422 11:09:43.352816       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-d8qgr\": pod kindnet-d8qgr is already assigned to node \"ha-821265-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-d8qgr" node="ha-821265-m03"
	E0422 11:09:43.353000       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod ec965a08-bffa-46ef-8edf-a3f29cb9b474(kube-system/kindnet-d8qgr) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-d8qgr"
	E0422 11:09:43.353028       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-d8qgr\": pod kindnet-d8qgr is already assigned to node \"ha-821265-m03\"" pod="kube-system/kindnet-d8qgr"
	I0422 11:09:43.353079       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-d8qgr" node="ha-821265-m03"
	E0422 11:09:43.352787       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lmhp7\": pod kube-proxy-lmhp7 is already assigned to node \"ha-821265-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lmhp7" node="ha-821265-m03"
	E0422 11:09:43.359109       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 45383871-e744-4764-823a-060a498ebc51(kube-system/kube-proxy-lmhp7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lmhp7"
	E0422 11:09:43.359136       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lmhp7\": pod kube-proxy-lmhp7 is already assigned to node \"ha-821265-m03\"" pod="kube-system/kube-proxy-lmhp7"
	I0422 11:09:43.359158       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lmhp7" node="ha-821265-m03"
	E0422 11:10:46.706330       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wx4rp\": pod kube-proxy-wx4rp is already assigned to node \"ha-821265-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wx4rp" node="ha-821265-m04"
	E0422 11:10:46.706533       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wx4rp\": pod kube-proxy-wx4rp is already assigned to node \"ha-821265-m04\"" pod="kube-system/kube-proxy-wx4rp"
	E0422 11:10:46.708956       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kfksf\": pod kindnet-kfksf is already assigned to node \"ha-821265-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kfksf" node="ha-821265-m04"
	E0422 11:10:46.709079       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kfksf\": pod kindnet-kfksf is already assigned to node \"ha-821265-m04\"" pod="kube-system/kindnet-kfksf"
	E0422 11:10:46.717414       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mkwbf\": pod kindnet-mkwbf is already assigned to node \"ha-821265-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-mkwbf" node="ha-821265-m04"
	E0422 11:10:46.717500       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 65fb25e8-6cff-49b8-902a-6415f2370faf(kube-system/kindnet-mkwbf) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mkwbf"
	E0422 11:10:46.717532       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mkwbf\": pod kindnet-mkwbf is already assigned to node \"ha-821265-m04\"" pod="kube-system/kindnet-mkwbf"
	I0422 11:10:46.717622       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mkwbf" node="ha-821265-m04"
	E0422 11:10:46.878843       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gvgbm\": pod kindnet-gvgbm is already assigned to node \"ha-821265-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gvgbm" node="ha-821265-m04"
	E0422 11:10:46.879083       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2a514bff-6dea-4863-8d8a-620a7f77e011(kube-system/kindnet-gvgbm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gvgbm"
	E0422 11:10:46.879126       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gvgbm\": pod kindnet-gvgbm is already assigned to node \"ha-821265-m04\"" pod="kube-system/kindnet-gvgbm"
	I0422 11:10:46.879172       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gvgbm" node="ha-821265-m04"
	E0422 11:10:46.880623       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lkrhg\": pod kube-proxy-lkrhg is already assigned to node \"ha-821265-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lkrhg" node="ha-821265-m04"
	E0422 11:10:46.880696       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1196fc23-a892-4e83-9cec-8e1a566a768a(kube-system/kube-proxy-lkrhg) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lkrhg"
	E0422 11:10:46.880810       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lkrhg\": pod kube-proxy-lkrhg is already assigned to node \"ha-821265-m04\"" pod="kube-system/kube-proxy-lkrhg"
	I0422 11:10:46.880880       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lkrhg" node="ha-821265-m04"
	
	
	==> kubelet <==
	Apr 22 11:09:21 ha-821265 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:09:21 ha-821265 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:10:08 ha-821265 kubelet[1370]: I0422 11:10:08.191147    1370 topology_manager.go:215] "Topology Admit Handler" podUID="1670d513-9071-4ee0-ae1b-7600c98019b8" podNamespace="default" podName="busybox-fc5497c4f-b4r5w"
	Apr 22 11:10:08 ha-821265 kubelet[1370]: I0422 11:10:08.311491    1370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds459\" (UniqueName: \"kubernetes.io/projected/1670d513-9071-4ee0-ae1b-7600c98019b8-kube-api-access-ds459\") pod \"busybox-fc5497c4f-b4r5w\" (UID: \"1670d513-9071-4ee0-ae1b-7600c98019b8\") " pod="default/busybox-fc5497c4f-b4r5w"
	Apr 22 11:10:11 ha-821265 kubelet[1370]: I0422 11:10:11.411960    1370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-b4r5w" podStartSLOduration=0.983111459 podStartE2EDuration="3.411838477s" podCreationTimestamp="2024-04-22 11:10:08 +0000 UTC" firstStartedPulling="2024-04-22 11:10:08.729442 +0000 UTC m=+167.295390289" lastFinishedPulling="2024-04-22 11:10:11.158169011 +0000 UTC m=+169.724117307" observedRunningTime="2024-04-22 11:10:11.41102546 +0000 UTC m=+169.976973767" watchObservedRunningTime="2024-04-22 11:10:11.411838477 +0000 UTC m=+169.977786785"
	Apr 22 11:10:21 ha-821265 kubelet[1370]: E0422 11:10:21.619839    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:10:21 ha-821265 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:10:21 ha-821265 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:10:21 ha-821265 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:10:21 ha-821265 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:11:21 ha-821265 kubelet[1370]: E0422 11:11:21.620243    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:11:21 ha-821265 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:11:21 ha-821265 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:11:21 ha-821265 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:11:21 ha-821265 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:12:21 ha-821265 kubelet[1370]: E0422 11:12:21.623214    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:12:21 ha-821265 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:12:21 ha-821265 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:12:21 ha-821265 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:12:21 ha-821265 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:13:21 ha-821265 kubelet[1370]: E0422 11:13:21.619745    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:13:21 ha-821265 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:13:21 ha-821265 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:13:21 ha-821265 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:13:21 ha-821265 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-821265 -n ha-821265
helpers_test.go:261: (dbg) Run:  kubectl --context ha-821265 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.29s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (54.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr: exit status 3 (3.202457397s)

                                                
                                                
-- stdout --
	ha-821265
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-821265-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 11:13:44.901428   32528 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:13:44.901696   32528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:13:44.901712   32528 out.go:304] Setting ErrFile to fd 2...
	I0422 11:13:44.901718   32528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:13:44.901918   32528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:13:44.902082   32528 out.go:298] Setting JSON to false
	I0422 11:13:44.902102   32528 mustload.go:65] Loading cluster: ha-821265
	I0422 11:13:44.902231   32528 notify.go:220] Checking for updates...
	I0422 11:13:44.902590   32528 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:13:44.902611   32528 status.go:255] checking status of ha-821265 ...
	I0422 11:13:44.903127   32528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:44.903213   32528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:44.917506   32528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46319
	I0422 11:13:44.917949   32528 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:44.918473   32528 main.go:141] libmachine: Using API Version  1
	I0422 11:13:44.918496   32528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:44.919008   32528 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:44.919275   32528 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:13:44.921025   32528 status.go:330] ha-821265 host status = "Running" (err=<nil>)
	I0422 11:13:44.921048   32528 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:13:44.921333   32528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:44.921377   32528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:44.938169   32528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41467
	I0422 11:13:44.938644   32528 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:44.939228   32528 main.go:141] libmachine: Using API Version  1
	I0422 11:13:44.939261   32528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:44.939630   32528 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:44.939866   32528 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:13:44.943031   32528 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:13:44.943429   32528 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:13:44.943460   32528 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:13:44.943580   32528 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:13:44.943931   32528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:44.943967   32528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:44.958620   32528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42649
	I0422 11:13:44.958995   32528 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:44.959440   32528 main.go:141] libmachine: Using API Version  1
	I0422 11:13:44.959466   32528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:44.959776   32528 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:44.960000   32528 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:13:44.960249   32528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:13:44.960269   32528 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:13:44.963095   32528 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:13:44.963563   32528 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:13:44.963603   32528 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:13:44.963737   32528 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:13:44.963902   32528 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:13:44.964055   32528 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:13:44.964234   32528 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:13:45.049391   32528 ssh_runner.go:195] Run: systemctl --version
	I0422 11:13:45.056093   32528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:13:45.072107   32528 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:13:45.072131   32528 api_server.go:166] Checking apiserver status ...
	I0422 11:13:45.072163   32528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:13:45.087541   32528 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0422 11:13:45.098239   32528 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:13:45.098290   32528 ssh_runner.go:195] Run: ls
	I0422 11:13:45.103674   32528 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:13:45.108044   32528 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:13:45.108073   32528 status.go:422] ha-821265 apiserver status = Running (err=<nil>)
	I0422 11:13:45.108083   32528 status.go:257] ha-821265 status: &{Name:ha-821265 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:13:45.108107   32528 status.go:255] checking status of ha-821265-m02 ...
	I0422 11:13:45.108398   32528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:45.108430   32528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:45.123221   32528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33125
	I0422 11:13:45.123683   32528 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:45.124125   32528 main.go:141] libmachine: Using API Version  1
	I0422 11:13:45.124143   32528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:45.124522   32528 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:45.124711   32528 main.go:141] libmachine: (ha-821265-m02) Calling .GetState
	I0422 11:13:45.126241   32528 status.go:330] ha-821265-m02 host status = "Running" (err=<nil>)
	I0422 11:13:45.126269   32528 host.go:66] Checking if "ha-821265-m02" exists ...
	I0422 11:13:45.126585   32528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:45.126629   32528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:45.141169   32528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35529
	I0422 11:13:45.141581   32528 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:45.141985   32528 main.go:141] libmachine: Using API Version  1
	I0422 11:13:45.142001   32528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:45.142327   32528 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:45.142494   32528 main.go:141] libmachine: (ha-821265-m02) Calling .GetIP
	I0422 11:13:45.145403   32528 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:13:45.145805   32528 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:13:45.145827   32528 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:13:45.145987   32528 host.go:66] Checking if "ha-821265-m02" exists ...
	I0422 11:13:45.146269   32528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:45.146303   32528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:45.160391   32528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43785
	I0422 11:13:45.160747   32528 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:45.161264   32528 main.go:141] libmachine: Using API Version  1
	I0422 11:13:45.161286   32528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:45.161605   32528 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:45.161807   32528 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:13:45.162014   32528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:13:45.162036   32528 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:13:45.164614   32528 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:13:45.165149   32528 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:13:45.165169   32528 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:13:45.165342   32528 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:13:45.165505   32528 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:13:45.165650   32528 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:13:45.165772   32528 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa Username:docker}
	W0422 11:13:47.689042   32528 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	W0422 11:13:47.689153   32528 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E0422 11:13:47.689170   32528 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0422 11:13:47.689179   32528 status.go:257] ha-821265-m02 status: &{Name:ha-821265-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 11:13:47.689195   32528 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0422 11:13:47.689202   32528 status.go:255] checking status of ha-821265-m03 ...
	I0422 11:13:47.689621   32528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:47.689676   32528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:47.704383   32528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45893
	I0422 11:13:47.704820   32528 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:47.705263   32528 main.go:141] libmachine: Using API Version  1
	I0422 11:13:47.705289   32528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:47.705630   32528 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:47.705853   32528 main.go:141] libmachine: (ha-821265-m03) Calling .GetState
	I0422 11:13:47.707471   32528 status.go:330] ha-821265-m03 host status = "Running" (err=<nil>)
	I0422 11:13:47.707494   32528 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:13:47.707884   32528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:47.707924   32528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:47.724122   32528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37243
	I0422 11:13:47.724510   32528 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:47.725012   32528 main.go:141] libmachine: Using API Version  1
	I0422 11:13:47.725033   32528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:47.725392   32528 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:47.725590   32528 main.go:141] libmachine: (ha-821265-m03) Calling .GetIP
	I0422 11:13:47.728344   32528 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:13:47.728841   32528 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:13:47.728887   32528 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:13:47.729019   32528 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:13:47.729360   32528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:47.729405   32528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:47.744689   32528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38393
	I0422 11:13:47.745233   32528 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:47.745713   32528 main.go:141] libmachine: Using API Version  1
	I0422 11:13:47.745766   32528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:47.746101   32528 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:47.746268   32528 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:13:47.746432   32528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:13:47.746452   32528 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:13:47.749492   32528 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:13:47.749861   32528 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:13:47.749881   32528 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:13:47.750041   32528 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:13:47.750178   32528 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:13:47.750325   32528 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:13:47.750464   32528 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:13:47.829796   32528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:13:47.848910   32528 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:13:47.848942   32528 api_server.go:166] Checking apiserver status ...
	I0422 11:13:47.848983   32528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:13:47.866691   32528 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup
	W0422 11:13:47.877525   32528 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:13:47.877572   32528 ssh_runner.go:195] Run: ls
	I0422 11:13:47.882770   32528 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:13:47.888251   32528 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:13:47.888279   32528 status.go:422] ha-821265-m03 apiserver status = Running (err=<nil>)
	I0422 11:13:47.888290   32528 status.go:257] ha-821265-m03 status: &{Name:ha-821265-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:13:47.888303   32528 status.go:255] checking status of ha-821265-m04 ...
	I0422 11:13:47.888599   32528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:47.888633   32528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:47.903171   32528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38763
	I0422 11:13:47.903657   32528 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:47.904117   32528 main.go:141] libmachine: Using API Version  1
	I0422 11:13:47.904134   32528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:47.904428   32528 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:47.904587   32528 main.go:141] libmachine: (ha-821265-m04) Calling .GetState
	I0422 11:13:47.906190   32528 status.go:330] ha-821265-m04 host status = "Running" (err=<nil>)
	I0422 11:13:47.906206   32528 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:13:47.906605   32528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:47.906651   32528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:47.920854   32528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38413
	I0422 11:13:47.921254   32528 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:47.921731   32528 main.go:141] libmachine: Using API Version  1
	I0422 11:13:47.921764   32528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:47.922063   32528 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:47.922247   32528 main.go:141] libmachine: (ha-821265-m04) Calling .GetIP
	I0422 11:13:47.925013   32528 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:13:47.925434   32528 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:13:47.925497   32528 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:13:47.925605   32528 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:13:47.926023   32528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:47.926059   32528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:47.940518   32528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46153
	I0422 11:13:47.940884   32528 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:47.941449   32528 main.go:141] libmachine: Using API Version  1
	I0422 11:13:47.941494   32528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:47.941789   32528 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:47.941965   32528 main.go:141] libmachine: (ha-821265-m04) Calling .DriverName
	I0422 11:13:47.942162   32528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:13:47.942180   32528 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHHostname
	I0422 11:13:47.944839   32528 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:13:47.945229   32528 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:13:47.945272   32528 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:13:47.945354   32528 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHPort
	I0422 11:13:47.945516   32528 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHKeyPath
	I0422 11:13:47.945687   32528 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHUsername
	I0422 11:13:47.945846   32528 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m04/id_rsa Username:docker}
	I0422 11:13:48.029027   32528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:13:48.048601   32528 status.go:257] ha-821265-m04 status: &{Name:ha-821265-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
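The stderr trace above shows why ha-821265-m02 is reported as "host: Error" / "kubelet: Nonexistent": the status probe cannot open an SSH session to 192.168.39.39:22 ("connect: no route to host", status.go:376), so every per-node check behind that connection is skipped. As a rough illustration of that kind of reachability probe — a minimal Go sketch only, not minikube's actual status code; the function name, port handling and timeout are assumptions:

// Illustrative sketch: TCP reachability probe in the spirit of the SSH dial
// that the status command performs against each node. Not minikube code.
package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH reports whether host:22 accepts a TCP connection within timeout.
// A failure such as "connect: no route to host" is what gets surfaced as
// "host: Error" / "kubelet: Nonexistent" for the node in the output above.
func probeSSH(host string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probeSSH("192.168.39.39", 5*time.Second); err != nil {
		fmt.Println("node unreachable:", err)
		return
	}
	fmt.Println("node reachable")
}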
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr: exit status 3 (4.774276095s)

                                                
                                                
-- stdout --
	ha-821265
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-821265-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 11:13:49.478801   32628 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:13:49.478913   32628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:13:49.478922   32628 out.go:304] Setting ErrFile to fd 2...
	I0422 11:13:49.478926   32628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:13:49.479122   32628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:13:49.479715   32628 out.go:298] Setting JSON to false
	I0422 11:13:49.479748   32628 mustload.go:65] Loading cluster: ha-821265
	I0422 11:13:49.480317   32628 notify.go:220] Checking for updates...
	I0422 11:13:49.481042   32628 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:13:49.481069   32628 status.go:255] checking status of ha-821265 ...
	I0422 11:13:49.481658   32628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:49.481703   32628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:49.500959   32628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36335
	I0422 11:13:49.501369   32628 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:49.502048   32628 main.go:141] libmachine: Using API Version  1
	I0422 11:13:49.502075   32628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:49.502492   32628 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:49.502723   32628 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:13:49.504260   32628 status.go:330] ha-821265 host status = "Running" (err=<nil>)
	I0422 11:13:49.504287   32628 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:13:49.504647   32628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:49.504693   32628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:49.519882   32628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46609
	I0422 11:13:49.520253   32628 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:49.520732   32628 main.go:141] libmachine: Using API Version  1
	I0422 11:13:49.520758   32628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:49.521101   32628 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:49.521308   32628 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:13:49.524354   32628 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:13:49.524809   32628 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:13:49.524842   32628 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:13:49.524987   32628 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:13:49.525368   32628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:49.525411   32628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:49.541543   32628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43489
	I0422 11:13:49.541924   32628 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:49.542408   32628 main.go:141] libmachine: Using API Version  1
	I0422 11:13:49.542435   32628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:49.542769   32628 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:49.542955   32628 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:13:49.543209   32628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:13:49.543244   32628 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:13:49.545915   32628 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:13:49.546281   32628 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:13:49.546311   32628 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:13:49.546501   32628 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:13:49.546641   32628 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:13:49.546807   32628 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:13:49.546910   32628 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:13:49.625650   32628 ssh_runner.go:195] Run: systemctl --version
	I0422 11:13:49.632461   32628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:13:49.648477   32628 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:13:49.648502   32628 api_server.go:166] Checking apiserver status ...
	I0422 11:13:49.648542   32628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:13:49.663427   32628 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0422 11:13:49.674292   32628 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:13:49.674344   32628 ssh_runner.go:195] Run: ls
	I0422 11:13:49.679868   32628 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:13:49.684872   32628 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:13:49.684904   32628 status.go:422] ha-821265 apiserver status = Running (err=<nil>)
	I0422 11:13:49.684918   32628 status.go:257] ha-821265 status: &{Name:ha-821265 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:13:49.684943   32628 status.go:255] checking status of ha-821265-m02 ...
	I0422 11:13:49.685281   32628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:49.685326   32628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:49.700449   32628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43153
	I0422 11:13:49.700995   32628 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:49.701498   32628 main.go:141] libmachine: Using API Version  1
	I0422 11:13:49.701520   32628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:49.701899   32628 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:49.702108   32628 main.go:141] libmachine: (ha-821265-m02) Calling .GetState
	I0422 11:13:49.703981   32628 status.go:330] ha-821265-m02 host status = "Running" (err=<nil>)
	I0422 11:13:49.703998   32628 host.go:66] Checking if "ha-821265-m02" exists ...
	I0422 11:13:49.704317   32628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:49.704361   32628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:49.720301   32628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43931
	I0422 11:13:49.720694   32628 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:49.721222   32628 main.go:141] libmachine: Using API Version  1
	I0422 11:13:49.721249   32628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:49.721553   32628 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:49.721735   32628 main.go:141] libmachine: (ha-821265-m02) Calling .GetIP
	I0422 11:13:49.724368   32628 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:13:49.724726   32628 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:13:49.724747   32628 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:13:49.724898   32628 host.go:66] Checking if "ha-821265-m02" exists ...
	I0422 11:13:49.725236   32628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:49.725269   32628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:49.739681   32628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43991
	I0422 11:13:49.740044   32628 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:49.740475   32628 main.go:141] libmachine: Using API Version  1
	I0422 11:13:49.740494   32628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:49.740829   32628 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:49.741073   32628 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:13:49.741266   32628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:13:49.741288   32628 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:13:49.744150   32628 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:13:49.744599   32628 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:13:49.744625   32628 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:13:49.744766   32628 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:13:49.744919   32628 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:13:49.745084   32628 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:13:49.745261   32628 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa Username:docker}
	W0422 11:13:50.765034   32628 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	I0422 11:13:50.765086   32628 retry.go:31] will retry after 169.909ms: dial tcp 192.168.39.39:22: connect: no route to host
	W0422 11:13:53.837012   32628 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	W0422 11:13:53.837106   32628 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E0422 11:13:53.837130   32628 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0422 11:13:53.837144   32628 status.go:257] ha-821265-m02 status: &{Name:ha-821265-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 11:13:53.837185   32628 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0422 11:13:53.837193   32628 status.go:255] checking status of ha-821265-m03 ...
	I0422 11:13:53.837556   32628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:53.837610   32628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:53.852081   32628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43315
	I0422 11:13:53.852572   32628 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:53.853018   32628 main.go:141] libmachine: Using API Version  1
	I0422 11:13:53.853041   32628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:53.853406   32628 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:53.853604   32628 main.go:141] libmachine: (ha-821265-m03) Calling .GetState
	I0422 11:13:53.855588   32628 status.go:330] ha-821265-m03 host status = "Running" (err=<nil>)
	I0422 11:13:53.855606   32628 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:13:53.855940   32628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:53.855978   32628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:53.870882   32628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39213
	I0422 11:13:53.871349   32628 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:53.871786   32628 main.go:141] libmachine: Using API Version  1
	I0422 11:13:53.871806   32628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:53.872092   32628 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:53.872297   32628 main.go:141] libmachine: (ha-821265-m03) Calling .GetIP
	I0422 11:13:53.875030   32628 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:13:53.875516   32628 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:13:53.875541   32628 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:13:53.875633   32628 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:13:53.875919   32628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:53.875957   32628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:53.889971   32628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44189
	I0422 11:13:53.890399   32628 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:53.890837   32628 main.go:141] libmachine: Using API Version  1
	I0422 11:13:53.890859   32628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:53.891177   32628 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:53.891374   32628 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:13:53.891543   32628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:13:53.891561   32628 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:13:53.894372   32628 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:13:53.894752   32628 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:13:53.894776   32628 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:13:53.894945   32628 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:13:53.895118   32628 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:13:53.895287   32628 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:13:53.895416   32628 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:13:53.978353   32628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:13:53.993131   32628 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:13:53.993162   32628 api_server.go:166] Checking apiserver status ...
	I0422 11:13:53.993200   32628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:13:54.007470   32628 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup
	W0422 11:13:54.019181   32628 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:13:54.019251   32628 ssh_runner.go:195] Run: ls
	I0422 11:13:54.025363   32628 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:13:54.030317   32628 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:13:54.030339   32628 status.go:422] ha-821265-m03 apiserver status = Running (err=<nil>)
	I0422 11:13:54.030360   32628 status.go:257] ha-821265-m03 status: &{Name:ha-821265-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:13:54.030388   32628 status.go:255] checking status of ha-821265-m04 ...
	I0422 11:13:54.030751   32628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:54.030790   32628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:54.047496   32628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
	I0422 11:13:54.047894   32628 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:54.048380   32628 main.go:141] libmachine: Using API Version  1
	I0422 11:13:54.048401   32628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:54.048729   32628 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:54.048943   32628 main.go:141] libmachine: (ha-821265-m04) Calling .GetState
	I0422 11:13:54.050676   32628 status.go:330] ha-821265-m04 host status = "Running" (err=<nil>)
	I0422 11:13:54.050693   32628 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:13:54.051098   32628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:54.051141   32628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:54.065592   32628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36257
	I0422 11:13:54.066017   32628 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:54.066492   32628 main.go:141] libmachine: Using API Version  1
	I0422 11:13:54.066512   32628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:54.066888   32628 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:54.067128   32628 main.go:141] libmachine: (ha-821265-m04) Calling .GetIP
	I0422 11:13:54.070102   32628 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:13:54.070606   32628 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:13:54.070634   32628 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:13:54.070774   32628 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:13:54.071150   32628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:54.071192   32628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:54.085881   32628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45363
	I0422 11:13:54.086266   32628 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:54.086697   32628 main.go:141] libmachine: Using API Version  1
	I0422 11:13:54.086715   32628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:54.086988   32628 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:54.087200   32628 main.go:141] libmachine: (ha-821265-m04) Calling .DriverName
	I0422 11:13:54.087377   32628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:13:54.087399   32628 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHHostname
	I0422 11:13:54.090302   32628 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:13:54.090707   32628 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:13:54.090728   32628 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:13:54.090876   32628 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHPort
	I0422 11:13:54.091037   32628 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHKeyPath
	I0422 11:13:54.091196   32628 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHUsername
	I0422 11:13:54.091340   32628 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m04/id_rsa Username:docker}
	I0422 11:13:54.173143   32628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:13:54.189695   32628 status.go:257] ha-821265-m04 status: &{Name:ha-821265-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
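ha_test.go:428 re-runs the status command after "node start m02" and keeps getting exit status 3 because m02 has not yet become reachable again. The pattern is a poll-until-healthy loop; a generic sketch of such a loop follows (illustrative only — the binary path is taken from the runs above, but the interval, deadline and helper name are assumptions, not the harness code):

// Illustrative sketch of a poll-until-healthy loop around the status command.
// Interval, deadline and function name are assumptions, not ha_test.go code.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForStatus re-runs "minikube status" until it exits 0 (all nodes healthy)
// or the context deadline expires.
func waitForStatus(ctx context.Context, profile string, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", profile,
			"status", "-v=7", "--alsologtostderr")
		if cmd.Run() == nil { // a non-zero exit (e.g. 3 above) means some node is unhealthy
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForStatus(ctx, "ha-821265", 5*time.Second); err != nil {
		fmt.Println("nodes never became healthy:", err)
		return
	}
	fmt.Println("all nodes healthy")
}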
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr: exit status 3 (4.249655527s)

                                                
                                                
-- stdout --
	ha-821265
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-821265-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 11:13:56.272818   32729 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:13:56.272940   32729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:13:56.272949   32729 out.go:304] Setting ErrFile to fd 2...
	I0422 11:13:56.272953   32729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:13:56.273226   32729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:13:56.273404   32729 out.go:298] Setting JSON to false
	I0422 11:13:56.273427   32729 mustload.go:65] Loading cluster: ha-821265
	I0422 11:13:56.273530   32729 notify.go:220] Checking for updates...
	I0422 11:13:56.273789   32729 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:13:56.273803   32729 status.go:255] checking status of ha-821265 ...
	I0422 11:13:56.274253   32729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:56.274308   32729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:56.290979   32729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33997
	I0422 11:13:56.291403   32729 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:56.291940   32729 main.go:141] libmachine: Using API Version  1
	I0422 11:13:56.291962   32729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:56.292328   32729 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:56.292523   32729 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:13:56.293985   32729 status.go:330] ha-821265 host status = "Running" (err=<nil>)
	I0422 11:13:56.294004   32729 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:13:56.294280   32729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:56.294314   32729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:56.308572   32729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39973
	I0422 11:13:56.308982   32729 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:56.309399   32729 main.go:141] libmachine: Using API Version  1
	I0422 11:13:56.309418   32729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:56.309714   32729 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:56.309897   32729 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:13:56.312660   32729 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:13:56.313097   32729 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:13:56.313119   32729 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:13:56.313278   32729 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:13:56.313540   32729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:56.313571   32729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:56.327545   32729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I0422 11:13:56.327927   32729 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:56.328328   32729 main.go:141] libmachine: Using API Version  1
	I0422 11:13:56.328354   32729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:56.328663   32729 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:56.328902   32729 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:13:56.329093   32729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:13:56.329135   32729 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:13:56.331829   32729 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:13:56.332289   32729 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:13:56.332330   32729 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:13:56.332442   32729 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:13:56.332632   32729 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:13:56.332764   32729 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:13:56.332929   32729 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:13:56.419093   32729 ssh_runner.go:195] Run: systemctl --version
	I0422 11:13:56.426555   32729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:13:56.448848   32729 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:13:56.448882   32729 api_server.go:166] Checking apiserver status ...
	I0422 11:13:56.448971   32729 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:13:56.467984   32729 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0422 11:13:56.481624   32729 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:13:56.481679   32729 ssh_runner.go:195] Run: ls
	I0422 11:13:56.487405   32729 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:13:56.491542   32729 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:13:56.491560   32729 status.go:422] ha-821265 apiserver status = Running (err=<nil>)
	I0422 11:13:56.491570   32729 status.go:257] ha-821265 status: &{Name:ha-821265 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:13:56.491584   32729 status.go:255] checking status of ha-821265-m02 ...
	I0422 11:13:56.491852   32729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:56.491876   32729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:56.506222   32729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36671
	I0422 11:13:56.506633   32729 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:56.507073   32729 main.go:141] libmachine: Using API Version  1
	I0422 11:13:56.507091   32729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:56.507373   32729 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:56.507542   32729 main.go:141] libmachine: (ha-821265-m02) Calling .GetState
	I0422 11:13:56.509094   32729 status.go:330] ha-821265-m02 host status = "Running" (err=<nil>)
	I0422 11:13:56.509107   32729 host.go:66] Checking if "ha-821265-m02" exists ...
	I0422 11:13:56.509356   32729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:56.509376   32729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:56.524621   32729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46509
	I0422 11:13:56.525034   32729 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:56.525529   32729 main.go:141] libmachine: Using API Version  1
	I0422 11:13:56.525543   32729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:56.525826   32729 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:56.525999   32729 main.go:141] libmachine: (ha-821265-m02) Calling .GetIP
	I0422 11:13:56.528697   32729 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:13:56.529201   32729 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:13:56.529230   32729 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:13:56.529388   32729 host.go:66] Checking if "ha-821265-m02" exists ...
	I0422 11:13:56.529705   32729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:13:56.529765   32729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:13:56.545162   32729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34059
	I0422 11:13:56.545537   32729 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:13:56.546044   32729 main.go:141] libmachine: Using API Version  1
	I0422 11:13:56.546070   32729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:13:56.546361   32729 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:13:56.546526   32729 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:13:56.546704   32729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:13:56.546725   32729 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:13:56.549889   32729 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:13:56.550326   32729 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:13:56.550355   32729 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:13:56.550497   32729 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:13:56.550680   32729 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:13:56.550815   32729 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:13:56.550961   32729 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa Username:docker}
	W0422 11:13:56.905014   32729 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	I0422 11:13:56.905085   32729 retry.go:31] will retry after 143.349299ms: dial tcp 192.168.39.39:22: connect: no route to host
	W0422 11:14:00.105026   32729 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	W0422 11:14:00.105097   32729 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E0422 11:14:00.105110   32729 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0422 11:14:00.105116   32729 status.go:257] ha-821265-m02 status: &{Name:ha-821265-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 11:14:00.105140   32729 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0422 11:14:00.105150   32729 status.go:255] checking status of ha-821265-m03 ...
	I0422 11:14:00.105454   32729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:00.105498   32729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:00.121736   32729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41343
	I0422 11:14:00.122167   32729 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:00.122641   32729 main.go:141] libmachine: Using API Version  1
	I0422 11:14:00.122673   32729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:00.123018   32729 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:00.123190   32729 main.go:141] libmachine: (ha-821265-m03) Calling .GetState
	I0422 11:14:00.124626   32729 status.go:330] ha-821265-m03 host status = "Running" (err=<nil>)
	I0422 11:14:00.124641   32729 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:14:00.124942   32729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:00.124977   32729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:00.140366   32729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46701
	I0422 11:14:00.140732   32729 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:00.141240   32729 main.go:141] libmachine: Using API Version  1
	I0422 11:14:00.141264   32729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:00.141616   32729 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:00.141827   32729 main.go:141] libmachine: (ha-821265-m03) Calling .GetIP
	I0422 11:14:00.144789   32729 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:00.145345   32729 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:14:00.145380   32729 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:00.145532   32729 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:14:00.145838   32729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:00.145884   32729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:00.160017   32729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40625
	I0422 11:14:00.160369   32729 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:00.160738   32729 main.go:141] libmachine: Using API Version  1
	I0422 11:14:00.160759   32729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:00.161061   32729 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:00.161308   32729 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:14:00.161501   32729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:00.161519   32729 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:14:00.164280   32729 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:00.164650   32729 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:14:00.164685   32729 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:00.164765   32729 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:14:00.164979   32729 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:14:00.165115   32729 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:14:00.165319   32729 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:14:00.240913   32729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:14:00.261650   32729 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:14:00.261674   32729 api_server.go:166] Checking apiserver status ...
	I0422 11:14:00.261704   32729 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:14:00.280867   32729 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup
	W0422 11:14:00.294002   32729 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:14:00.294057   32729 ssh_runner.go:195] Run: ls
	I0422 11:14:00.299658   32729 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:14:00.307636   32729 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:14:00.307657   32729 status.go:422] ha-821265-m03 apiserver status = Running (err=<nil>)
	I0422 11:14:00.307666   32729 status.go:257] ha-821265-m03 status: &{Name:ha-821265-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:14:00.307680   32729 status.go:255] checking status of ha-821265-m04 ...
	I0422 11:14:00.307949   32729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:00.307981   32729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:00.322874   32729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45623
	I0422 11:14:00.323337   32729 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:00.323826   32729 main.go:141] libmachine: Using API Version  1
	I0422 11:14:00.323850   32729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:00.324111   32729 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:00.324302   32729 main.go:141] libmachine: (ha-821265-m04) Calling .GetState
	I0422 11:14:00.325857   32729 status.go:330] ha-821265-m04 host status = "Running" (err=<nil>)
	I0422 11:14:00.325874   32729 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:14:00.326125   32729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:00.326149   32729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:00.340074   32729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42933
	I0422 11:14:00.340504   32729 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:00.340973   32729 main.go:141] libmachine: Using API Version  1
	I0422 11:14:00.340995   32729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:00.341289   32729 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:00.341431   32729 main.go:141] libmachine: (ha-821265-m04) Calling .GetIP
	I0422 11:14:00.344371   32729 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:00.344905   32729 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:14:00.344941   32729 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:00.345114   32729 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:14:00.345502   32729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:00.345550   32729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:00.359392   32729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42393
	I0422 11:14:00.359871   32729 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:00.360407   32729 main.go:141] libmachine: Using API Version  1
	I0422 11:14:00.360441   32729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:00.360808   32729 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:00.360987   32729 main.go:141] libmachine: (ha-821265-m04) Calling .DriverName
	I0422 11:14:00.361209   32729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:00.361236   32729 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHHostname
	I0422 11:14:00.363844   32729 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:00.364294   32729 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:14:00.364335   32729 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:00.364479   32729 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHPort
	I0422 11:14:00.364651   32729 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHKeyPath
	I0422 11:14:00.364827   32729 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHUsername
	I0422 11:14:00.365000   32729 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m04/id_rsa Username:docker}
	I0422 11:14:00.449179   32729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:14:00.467149   32729 status.go:257] ha-821265-m04 status: &{Name:ha-821265-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
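Editor's note on the trace above: for each node the status run performs three independent probes, a TCP dial to the node's SSH port (which is where "dial tcp 192.168.39.39:22: connect: no route to host" surfaces for ha-821265-m02), a "df -h /var" capacity check over that SSH session, and an HTTPS GET against the shared apiserver endpoint https://192.168.39.254:8443/healthz. The sketch below is a minimal reproduction of the first and last probes using only the Go standard library; it is not minikube's own status code, and the hard-coded addresses are copied from the log purely for illustration.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

// probeSSH mirrors the dial that fails for ha-821265-m02: a plain TCP
// connection to the node's SSH port. "connect: no route to host" would
// surface here.
func probeSSH(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

// probeHealthz mirrors the apiserver check: GET /healthz on the cluster VIP
// and expect HTTP 200 with body "ok". The cert is self-signed, so TLS
// verification is skipped, as a quick manual check would do.
func probeHealthz(url string) (string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%d %s", resp.StatusCode, string(body)), nil
}

func main() {
	// Addresses copied from the log, for illustration only.
	if err := probeSSH("192.168.39.39:22"); err != nil {
		fmt.Println("ssh probe:", err) // expected to fail like the log above
	}
	if out, err := probeHealthz("https://192.168.39.254:8443/healthz"); err == nil {
		fmt.Println("healthz:", out) // e.g. "200 ok"
	} else {
		fmt.Println("healthz probe:", err)
	}
}

Run against this cluster, the SSH probe is expected to fail exactly as the log does, while the healthz probe should still report 200 because the remaining control-plane nodes keep the VIP serving.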
E0422 11:14:01.487683   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
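The cert_rotation.go error above is unrelated to the ha-821265 cluster: client-go's certificate-rotation watcher is still tracking a kubeconfig user entry for the earlier functional-668059 profile, whose client.crt was removed when that profile was deleted. That reading is an inference from the path in the message, not something the log states; a quick way to check it is to load the kubeconfig and stat every referenced client certificate, as in the sketch below (which uses client-go's clientcmd loader and is illustrative only).

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// In this CI the kubeconfig lives under the minikube-integration tree;
	// fall back to the usual ~/.kube/config if KUBECONFIG is unset.
	kubeconfig := os.Getenv("KUBECONFIG")
	if kubeconfig == "" {
		kubeconfig = clientcmd.RecommendedHomeFile
	}

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}

	// Report user entries whose client certificate file has disappeared;
	// a stale entry like this is the assumed cause of the
	// "client.crt: no such file or directory" message above.
	for name, auth := range cfg.AuthInfos {
		if auth.ClientCertificate == "" {
			continue
		}
		if _, err := os.Stat(auth.ClientCertificate); err != nil {
			fmt.Printf("stale user %q: %v\n", name, err)
		}
	}
}

Removing the stale user/context entry, or recreating the deleted profile, should stop the message; it does not affect the ha test result either way.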
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr: exit status 3 (3.737423843s)

                                                
                                                
-- stdout --
	ha-821265
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-821265-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 11:14:03.119683   32829 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:14:03.119928   32829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:14:03.119942   32829 out.go:304] Setting ErrFile to fd 2...
	I0422 11:14:03.119949   32829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:14:03.120157   32829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:14:03.120326   32829 out.go:298] Setting JSON to false
	I0422 11:14:03.120355   32829 mustload.go:65] Loading cluster: ha-821265
	I0422 11:14:03.120400   32829 notify.go:220] Checking for updates...
	I0422 11:14:03.120752   32829 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:14:03.120784   32829 status.go:255] checking status of ha-821265 ...
	I0422 11:14:03.121223   32829 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:03.121305   32829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:03.137106   32829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45817
	I0422 11:14:03.137552   32829 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:03.138116   32829 main.go:141] libmachine: Using API Version  1
	I0422 11:14:03.138145   32829 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:03.138459   32829 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:03.138754   32829 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:14:03.140727   32829 status.go:330] ha-821265 host status = "Running" (err=<nil>)
	I0422 11:14:03.140743   32829 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:14:03.141207   32829 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:03.141258   32829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:03.155673   32829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40881
	I0422 11:14:03.156160   32829 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:03.156682   32829 main.go:141] libmachine: Using API Version  1
	I0422 11:14:03.156710   32829 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:03.157037   32829 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:03.157192   32829 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:14:03.159939   32829 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:03.160419   32829 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:14:03.160437   32829 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:03.160621   32829 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:14:03.161055   32829 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:03.161119   32829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:03.175816   32829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35385
	I0422 11:14:03.176178   32829 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:03.176646   32829 main.go:141] libmachine: Using API Version  1
	I0422 11:14:03.176671   32829 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:03.176995   32829 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:03.177186   32829 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:14:03.177375   32829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:03.177400   32829 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:14:03.179942   32829 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:03.180374   32829 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:14:03.180393   32829 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:03.180540   32829 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:14:03.180685   32829 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:14:03.180816   32829 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:14:03.180973   32829 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:14:03.263584   32829 ssh_runner.go:195] Run: systemctl --version
	I0422 11:14:03.270997   32829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:14:03.287128   32829 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:14:03.287153   32829 api_server.go:166] Checking apiserver status ...
	I0422 11:14:03.287186   32829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:14:03.302360   32829 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0422 11:14:03.313860   32829 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:14:03.313923   32829 ssh_runner.go:195] Run: ls
	I0422 11:14:03.320156   32829 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:14:03.324293   32829 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:14:03.324313   32829 status.go:422] ha-821265 apiserver status = Running (err=<nil>)
	I0422 11:14:03.324324   32829 status.go:257] ha-821265 status: &{Name:ha-821265 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:14:03.324338   32829 status.go:255] checking status of ha-821265-m02 ...
	I0422 11:14:03.324631   32829 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:03.324695   32829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:03.339172   32829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43717
	I0422 11:14:03.339567   32829 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:03.340051   32829 main.go:141] libmachine: Using API Version  1
	I0422 11:14:03.340073   32829 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:03.340361   32829 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:03.340590   32829 main.go:141] libmachine: (ha-821265-m02) Calling .GetState
	I0422 11:14:03.342144   32829 status.go:330] ha-821265-m02 host status = "Running" (err=<nil>)
	I0422 11:14:03.342157   32829 host.go:66] Checking if "ha-821265-m02" exists ...
	I0422 11:14:03.342475   32829 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:03.342507   32829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:03.357501   32829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0422 11:14:03.357923   32829 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:03.358457   32829 main.go:141] libmachine: Using API Version  1
	I0422 11:14:03.358479   32829 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:03.358814   32829 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:03.359038   32829 main.go:141] libmachine: (ha-821265-m02) Calling .GetIP
	I0422 11:14:03.362117   32829 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:14:03.362654   32829 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:14:03.362676   32829 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:14:03.362811   32829 host.go:66] Checking if "ha-821265-m02" exists ...
	I0422 11:14:03.363079   32829 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:03.363111   32829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:03.377515   32829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45917
	I0422 11:14:03.377922   32829 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:03.378363   32829 main.go:141] libmachine: Using API Version  1
	I0422 11:14:03.378382   32829 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:03.378734   32829 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:03.378928   32829 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:14:03.379129   32829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:03.379152   32829 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:14:03.382293   32829 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:14:03.382690   32829 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:14:03.382712   32829 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:14:03.382879   32829 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:14:03.383053   32829 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:14:03.383210   32829 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:14:03.383341   32829 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa Username:docker}
	W0422 11:14:06.441054   32829 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	W0422 11:14:06.441175   32829 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E0422 11:14:06.441201   32829 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0422 11:14:06.441224   32829 status.go:257] ha-821265-m02 status: &{Name:ha-821265-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 11:14:06.441248   32829 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0422 11:14:06.441261   32829 status.go:255] checking status of ha-821265-m03 ...
	I0422 11:14:06.441558   32829 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:06.441597   32829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:06.456355   32829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I0422 11:14:06.456814   32829 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:06.457267   32829 main.go:141] libmachine: Using API Version  1
	I0422 11:14:06.457290   32829 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:06.457621   32829 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:06.457793   32829 main.go:141] libmachine: (ha-821265-m03) Calling .GetState
	I0422 11:14:06.459288   32829 status.go:330] ha-821265-m03 host status = "Running" (err=<nil>)
	I0422 11:14:06.459303   32829 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:14:06.459598   32829 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:06.459636   32829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:06.475495   32829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I0422 11:14:06.476030   32829 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:06.476513   32829 main.go:141] libmachine: Using API Version  1
	I0422 11:14:06.476537   32829 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:06.476898   32829 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:06.477095   32829 main.go:141] libmachine: (ha-821265-m03) Calling .GetIP
	I0422 11:14:06.480077   32829 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:06.480455   32829 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:14:06.480494   32829 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:06.480639   32829 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:14:06.481078   32829 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:06.481120   32829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:06.496440   32829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39545
	I0422 11:14:06.496879   32829 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:06.497267   32829 main.go:141] libmachine: Using API Version  1
	I0422 11:14:06.497292   32829 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:06.497554   32829 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:06.497696   32829 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:14:06.497853   32829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:06.497872   32829 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:14:06.500489   32829 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:06.501006   32829 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:14:06.501029   32829 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:06.501158   32829 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:14:06.501343   32829 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:14:06.501525   32829 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:14:06.501645   32829 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:14:06.581276   32829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:14:06.599121   32829 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:14:06.599149   32829 api_server.go:166] Checking apiserver status ...
	I0422 11:14:06.599192   32829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:14:06.615956   32829 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup
	W0422 11:14:06.627223   32829 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:14:06.627287   32829 ssh_runner.go:195] Run: ls
	I0422 11:14:06.632501   32829 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:14:06.641073   32829 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:14:06.641095   32829 status.go:422] ha-821265-m03 apiserver status = Running (err=<nil>)
	I0422 11:14:06.641104   32829 status.go:257] ha-821265-m03 status: &{Name:ha-821265-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:14:06.641117   32829 status.go:255] checking status of ha-821265-m04 ...
	I0422 11:14:06.641400   32829 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:06.641431   32829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:06.656641   32829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41653
	I0422 11:14:06.657053   32829 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:06.657518   32829 main.go:141] libmachine: Using API Version  1
	I0422 11:14:06.657539   32829 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:06.657879   32829 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:06.658070   32829 main.go:141] libmachine: (ha-821265-m04) Calling .GetState
	I0422 11:14:06.659939   32829 status.go:330] ha-821265-m04 host status = "Running" (err=<nil>)
	I0422 11:14:06.659955   32829 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:14:06.660218   32829 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:06.660255   32829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:06.675478   32829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
	I0422 11:14:06.675882   32829 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:06.676278   32829 main.go:141] libmachine: Using API Version  1
	I0422 11:14:06.676297   32829 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:06.676585   32829 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:06.676763   32829 main.go:141] libmachine: (ha-821265-m04) Calling .GetIP
	I0422 11:14:06.679729   32829 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:06.680180   32829 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:14:06.680205   32829 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:06.680372   32829 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:14:06.680712   32829 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:06.680754   32829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:06.696289   32829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46215
	I0422 11:14:06.696693   32829 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:06.697258   32829 main.go:141] libmachine: Using API Version  1
	I0422 11:14:06.697283   32829 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:06.697597   32829 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:06.697804   32829 main.go:141] libmachine: (ha-821265-m04) Calling .DriverName
	I0422 11:14:06.697998   32829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:06.698018   32829 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHHostname
	I0422 11:14:06.701018   32829 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:06.701477   32829 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:14:06.701501   32829 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:06.701686   32829 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHPort
	I0422 11:14:06.701883   32829 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHKeyPath
	I0422 11:14:06.702047   32829 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHUsername
	I0422 11:14:06.702201   32829 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m04/id_rsa Username:docker}
	I0422 11:14:06.784990   32829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:14:06.800604   32829 status.go:257] ha-821265-m04 status: &{Name:ha-821265-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
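The second status run fails on ha-821265-m02 in the same way, after roughly three seconds of SSH dial attempts. The first trace shows the underlying pattern explicitly ("will retry after 143.349299ms"): re-dial with a short randomized backoff until a deadline passes, then report the host as Error. The sketch below imitates that pattern with the standard library; it is an illustrative stand-in, not minikube's retry.go, and the address is again taken from the log.

package main

import (
	"fmt"
	"math/rand"
	"net"
	"time"
)

// dialWithRetry keeps re-dialing with a small randomized backoff until the
// deadline expires, which is the behaviour visible in the trace before the
// node is marked "Error".
func dialWithRetry(addr string, deadline time.Duration) (net.Conn, error) {
	var lastErr error
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		// Sub-second randomized backoff, similar in spirit to the
		// "will retry after 143.349299ms" delays logged above.
		time.Sleep(time.Duration(100+rand.Intn(400)) * time.Millisecond)
	}
	return nil, fmt.Errorf("gave up dialing %s: %w", addr, lastErr)
}

func main() {
	// Address of the unreachable node from the log, for illustration only.
	if _, err := dialWithRetry("192.168.39.39:22", 4*time.Second); err != nil {
		fmt.Println(err)
	}
}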
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr: exit status 3 (3.755361997s)

                                                
                                                
-- stdout --
	ha-821265
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-821265-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 11:14:09.654318   32947 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:14:09.654709   32947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:14:09.654727   32947 out.go:304] Setting ErrFile to fd 2...
	I0422 11:14:09.654735   32947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:14:09.655188   32947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:14:09.655474   32947 out.go:298] Setting JSON to false
	I0422 11:14:09.655505   32947 mustload.go:65] Loading cluster: ha-821265
	I0422 11:14:09.655687   32947 notify.go:220] Checking for updates...
	I0422 11:14:09.656363   32947 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:14:09.656390   32947 status.go:255] checking status of ha-821265 ...
	I0422 11:14:09.656913   32947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:09.656965   32947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:09.673355   32947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43145
	I0422 11:14:09.673780   32947 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:09.674382   32947 main.go:141] libmachine: Using API Version  1
	I0422 11:14:09.674406   32947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:09.674810   32947 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:09.675062   32947 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:14:09.676630   32947 status.go:330] ha-821265 host status = "Running" (err=<nil>)
	I0422 11:14:09.676649   32947 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:14:09.676955   32947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:09.677003   32947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:09.692017   32947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38723
	I0422 11:14:09.692414   32947 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:09.692892   32947 main.go:141] libmachine: Using API Version  1
	I0422 11:14:09.692918   32947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:09.693226   32947 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:09.693395   32947 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:14:09.696513   32947 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:09.696914   32947 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:14:09.696944   32947 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:09.697059   32947 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:14:09.697347   32947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:09.697384   32947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:09.712480   32947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I0422 11:14:09.712897   32947 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:09.713320   32947 main.go:141] libmachine: Using API Version  1
	I0422 11:14:09.713341   32947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:09.713673   32947 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:09.713913   32947 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:14:09.714143   32947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:09.714179   32947 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:14:09.717076   32947 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:09.717540   32947 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:14:09.717574   32947 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:09.717670   32947 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:14:09.717828   32947 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:14:09.717973   32947 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:14:09.718113   32947 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:14:09.801612   32947 ssh_runner.go:195] Run: systemctl --version
	I0422 11:14:09.808665   32947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:14:09.826742   32947 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:14:09.826770   32947 api_server.go:166] Checking apiserver status ...
	I0422 11:14:09.826806   32947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:14:09.850861   32947 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0422 11:14:09.861099   32947 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:14:09.861158   32947 ssh_runner.go:195] Run: ls
	I0422 11:14:09.866653   32947 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:14:09.871066   32947 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:14:09.871088   32947 status.go:422] ha-821265 apiserver status = Running (err=<nil>)
	I0422 11:14:09.871101   32947 status.go:257] ha-821265 status: &{Name:ha-821265 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:14:09.871125   32947 status.go:255] checking status of ha-821265-m02 ...
	I0422 11:14:09.871412   32947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:09.871450   32947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:09.886553   32947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38309
	I0422 11:14:09.887003   32947 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:09.887484   32947 main.go:141] libmachine: Using API Version  1
	I0422 11:14:09.887497   32947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:09.887886   32947 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:09.888070   32947 main.go:141] libmachine: (ha-821265-m02) Calling .GetState
	I0422 11:14:09.889581   32947 status.go:330] ha-821265-m02 host status = "Running" (err=<nil>)
	I0422 11:14:09.889597   32947 host.go:66] Checking if "ha-821265-m02" exists ...
	I0422 11:14:09.889998   32947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:09.890040   32947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:09.904455   32947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36617
	I0422 11:14:09.904896   32947 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:09.905353   32947 main.go:141] libmachine: Using API Version  1
	I0422 11:14:09.905385   32947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:09.905673   32947 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:09.905847   32947 main.go:141] libmachine: (ha-821265-m02) Calling .GetIP
	I0422 11:14:09.908275   32947 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:14:09.908668   32947 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:14:09.908694   32947 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:14:09.908899   32947 host.go:66] Checking if "ha-821265-m02" exists ...
	I0422 11:14:09.909187   32947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:09.909224   32947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:09.926059   32947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35333
	I0422 11:14:09.926591   32947 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:09.927158   32947 main.go:141] libmachine: Using API Version  1
	I0422 11:14:09.927179   32947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:09.927513   32947 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:09.927683   32947 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:14:09.927889   32947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:09.927913   32947 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:14:09.930917   32947 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:14:09.931344   32947 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:14:09.931382   32947 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:14:09.931494   32947 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:14:09.931700   32947 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:14:09.931878   32947 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:14:09.932058   32947 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa Username:docker}
	W0422 11:14:13.001032   32947 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	W0422 11:14:13.001134   32947 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E0422 11:14:13.001151   32947 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0422 11:14:13.001158   32947 status.go:257] ha-821265-m02 status: &{Name:ha-821265-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 11:14:13.001174   32947 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0422 11:14:13.001181   32947 status.go:255] checking status of ha-821265-m03 ...
	I0422 11:14:13.001473   32947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:13.001518   32947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:13.016538   32947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35457
	I0422 11:14:13.016974   32947 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:13.017446   32947 main.go:141] libmachine: Using API Version  1
	I0422 11:14:13.017477   32947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:13.017765   32947 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:13.017960   32947 main.go:141] libmachine: (ha-821265-m03) Calling .GetState
	I0422 11:14:13.019755   32947 status.go:330] ha-821265-m03 host status = "Running" (err=<nil>)
	I0422 11:14:13.019774   32947 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:14:13.020169   32947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:13.020212   32947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:13.035145   32947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40221
	I0422 11:14:13.035520   32947 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:13.035984   32947 main.go:141] libmachine: Using API Version  1
	I0422 11:14:13.036019   32947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:13.036321   32947 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:13.036504   32947 main.go:141] libmachine: (ha-821265-m03) Calling .GetIP
	I0422 11:14:13.039258   32947 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:13.039678   32947 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:14:13.039715   32947 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:13.039868   32947 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:14:13.040286   32947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:13.040326   32947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:13.054282   32947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I0422 11:14:13.054729   32947 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:13.055182   32947 main.go:141] libmachine: Using API Version  1
	I0422 11:14:13.055225   32947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:13.055517   32947 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:13.055689   32947 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:14:13.055876   32947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:13.055901   32947 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:14:13.058423   32947 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:13.058828   32947 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:14:13.058859   32947 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:13.058980   32947 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:14:13.059161   32947 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:14:13.059333   32947 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:14:13.059455   32947 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:14:13.137454   32947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:14:13.153986   32947 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:14:13.154012   32947 api_server.go:166] Checking apiserver status ...
	I0422 11:14:13.154050   32947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:14:13.168696   32947 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup
	W0422 11:14:13.179129   32947 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:14:13.179195   32947 ssh_runner.go:195] Run: ls
	I0422 11:14:13.184524   32947 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:14:13.194189   32947 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:14:13.194212   32947 status.go:422] ha-821265-m03 apiserver status = Running (err=<nil>)
	I0422 11:14:13.194238   32947 status.go:257] ha-821265-m03 status: &{Name:ha-821265-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:14:13.194254   32947 status.go:255] checking status of ha-821265-m04 ...
	I0422 11:14:13.194614   32947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:13.194656   32947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:13.209453   32947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38571
	I0422 11:14:13.209805   32947 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:13.210235   32947 main.go:141] libmachine: Using API Version  1
	I0422 11:14:13.210271   32947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:13.210554   32947 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:13.210749   32947 main.go:141] libmachine: (ha-821265-m04) Calling .GetState
	I0422 11:14:13.212199   32947 status.go:330] ha-821265-m04 host status = "Running" (err=<nil>)
	I0422 11:14:13.212216   32947 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:14:13.212596   32947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:13.212652   32947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:13.227813   32947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46449
	I0422 11:14:13.228287   32947 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:13.228736   32947 main.go:141] libmachine: Using API Version  1
	I0422 11:14:13.228754   32947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:13.229061   32947 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:13.229242   32947 main.go:141] libmachine: (ha-821265-m04) Calling .GetIP
	I0422 11:14:13.232419   32947 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:13.232922   32947 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:14:13.232957   32947 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:13.233103   32947 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:14:13.233503   32947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:13.233572   32947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:13.247584   32947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43139
	I0422 11:14:13.247977   32947 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:13.248423   32947 main.go:141] libmachine: Using API Version  1
	I0422 11:14:13.248448   32947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:13.248794   32947 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:13.248987   32947 main.go:141] libmachine: (ha-821265-m04) Calling .DriverName
	I0422 11:14:13.249184   32947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:13.249214   32947 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHHostname
	I0422 11:14:13.251675   32947 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:13.252067   32947 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:14:13.252105   32947 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:13.252241   32947 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHPort
	I0422 11:14:13.252475   32947 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHKeyPath
	I0422 11:14:13.252624   32947 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHUsername
	I0422 11:14:13.252748   32947 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m04/id_rsa Username:docker}
	I0422 11:14:13.337718   32947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:14:13.353717   32947 status.go:257] ha-821265-m04 status: &{Name:ha-821265-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
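For context, the per-node checks visible in the stderr log above reduce to two probes: a disk-usage query run over SSH (sh -c "df -h /var | awk 'NR==2{print $5}'"), which fails against ha-821265-m02 with "no route to host", and an HTTP GET against the apiserver healthz endpoint on the VIP. The following is a minimal, standalone Go sketch of those two probes for manual triage; the IPs, port, and SSH username are taken from the log, and the helpers are purely illustrative, not minikube's own code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

// checkDiskUsage runs the same df one-liner the log shows, over the system ssh client.
// Assumes key-based SSH access to the node (the log uses the per-machine id_rsa key).
func checkDiskUsage(host string) (string, error) {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=5",
		"docker@"+host,
		`df -h /var | awk 'NR==2{print $5}'`)
	out, err := cmd.Output()
	return string(out), err
}

// checkAPIServer hits the healthz endpoint seen in the log.
// The cluster uses a self-signed CA, so certificate verification is skipped here.
func checkAPIServer(endpoint string) (int, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	// Values taken from the log above: ha-821265-m02 and the control-plane VIP.
	if usage, err := checkDiskUsage("192.168.39.39"); err != nil {
		fmt.Println("df check failed:", err) // reproduces "no route to host" when the node is unreachable
	} else {
		fmt.Println("/var usage:", usage)
	}
	if code, err := checkAPIServer("https://192.168.39.254:8443"); err != nil {
		fmt.Println("healthz check failed:", err)
	} else {
		fmt.Println("healthz status:", code) // the log shows 200 / "ok"
	}
}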
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr: exit status 3 (3.727452319s)

                                                
                                                
-- stdout --
	ha-821265
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-821265-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 11:14:16.945750   33048 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:14:16.945884   33048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:14:16.945893   33048 out.go:304] Setting ErrFile to fd 2...
	I0422 11:14:16.945899   33048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:14:16.946103   33048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:14:16.946257   33048 out.go:298] Setting JSON to false
	I0422 11:14:16.946280   33048 mustload.go:65] Loading cluster: ha-821265
	I0422 11:14:16.946392   33048 notify.go:220] Checking for updates...
	I0422 11:14:16.946641   33048 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:14:16.946654   33048 status.go:255] checking status of ha-821265 ...
	I0422 11:14:16.947016   33048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:16.947069   33048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:16.964940   33048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I0422 11:14:16.965357   33048 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:16.965991   33048 main.go:141] libmachine: Using API Version  1
	I0422 11:14:16.966017   33048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:16.966324   33048 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:16.966544   33048 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:14:16.968199   33048 status.go:330] ha-821265 host status = "Running" (err=<nil>)
	I0422 11:14:16.968213   33048 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:14:16.968590   33048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:16.968632   33048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:16.982790   33048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43917
	I0422 11:14:16.983202   33048 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:16.983625   33048 main.go:141] libmachine: Using API Version  1
	I0422 11:14:16.983648   33048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:16.983943   33048 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:16.984108   33048 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:14:16.986759   33048 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:16.987164   33048 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:14:16.987192   33048 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:16.987322   33048 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:14:16.987676   33048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:16.987717   33048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:17.002423   33048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42589
	I0422 11:14:17.002818   33048 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:17.003245   33048 main.go:141] libmachine: Using API Version  1
	I0422 11:14:17.003260   33048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:17.003550   33048 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:17.003725   33048 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:14:17.003927   33048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:17.003951   33048 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:14:17.006669   33048 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:17.007038   33048 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:14:17.007073   33048 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:17.007202   33048 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:14:17.007394   33048 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:14:17.007542   33048 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:14:17.007689   33048 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:14:17.089828   33048 ssh_runner.go:195] Run: systemctl --version
	I0422 11:14:17.096955   33048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:14:17.113946   33048 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:14:17.113982   33048 api_server.go:166] Checking apiserver status ...
	I0422 11:14:17.114023   33048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:14:17.129387   33048 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0422 11:14:17.140868   33048 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:14:17.140905   33048 ssh_runner.go:195] Run: ls
	I0422 11:14:17.146318   33048 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:14:17.151033   33048 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:14:17.151054   33048 status.go:422] ha-821265 apiserver status = Running (err=<nil>)
	I0422 11:14:17.151068   33048 status.go:257] ha-821265 status: &{Name:ha-821265 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:14:17.151087   33048 status.go:255] checking status of ha-821265-m02 ...
	I0422 11:14:17.151370   33048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:17.151400   33048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:17.165571   33048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43489
	I0422 11:14:17.166010   33048 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:17.166575   33048 main.go:141] libmachine: Using API Version  1
	I0422 11:14:17.166604   33048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:17.166891   33048 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:17.167080   33048 main.go:141] libmachine: (ha-821265-m02) Calling .GetState
	I0422 11:14:17.168655   33048 status.go:330] ha-821265-m02 host status = "Running" (err=<nil>)
	I0422 11:14:17.168670   33048 host.go:66] Checking if "ha-821265-m02" exists ...
	I0422 11:14:17.169045   33048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:17.169088   33048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:17.183416   33048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37753
	I0422 11:14:17.183892   33048 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:17.184370   33048 main.go:141] libmachine: Using API Version  1
	I0422 11:14:17.184393   33048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:17.184713   33048 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:17.184908   33048 main.go:141] libmachine: (ha-821265-m02) Calling .GetIP
	I0422 11:14:17.187468   33048 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:14:17.187939   33048 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:14:17.187965   33048 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:14:17.188115   33048 host.go:66] Checking if "ha-821265-m02" exists ...
	I0422 11:14:17.188466   33048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:17.188494   33048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:17.202891   33048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35957
	I0422 11:14:17.203332   33048 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:17.203778   33048 main.go:141] libmachine: Using API Version  1
	I0422 11:14:17.203800   33048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:17.204081   33048 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:17.204295   33048 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:14:17.204490   33048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:17.204507   33048 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:14:17.207345   33048 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:14:17.207759   33048 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:14:17.207779   33048 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:14:17.207962   33048 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:14:17.208136   33048 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:14:17.208298   33048 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:14:17.208440   33048 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa Username:docker}
	W0422 11:14:20.264997   33048 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.39:22: connect: no route to host
	W0422 11:14:20.265110   33048 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	E0422 11:14:20.265133   33048 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0422 11:14:20.265154   33048 status.go:257] ha-821265-m02 status: &{Name:ha-821265-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 11:14:20.265185   33048 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.39:22: connect: no route to host
	I0422 11:14:20.265193   33048 status.go:255] checking status of ha-821265-m03 ...
	I0422 11:14:20.265592   33048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:20.265644   33048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:20.280865   33048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46479
	I0422 11:14:20.281358   33048 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:20.281823   33048 main.go:141] libmachine: Using API Version  1
	I0422 11:14:20.281848   33048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:20.282156   33048 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:20.282357   33048 main.go:141] libmachine: (ha-821265-m03) Calling .GetState
	I0422 11:14:20.283805   33048 status.go:330] ha-821265-m03 host status = "Running" (err=<nil>)
	I0422 11:14:20.283829   33048 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:14:20.284093   33048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:20.284115   33048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:20.299505   33048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46393
	I0422 11:14:20.299851   33048 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:20.300242   33048 main.go:141] libmachine: Using API Version  1
	I0422 11:14:20.300260   33048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:20.300553   33048 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:20.300715   33048 main.go:141] libmachine: (ha-821265-m03) Calling .GetIP
	I0422 11:14:20.303323   33048 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:20.303750   33048 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:14:20.303788   33048 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:20.303880   33048 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:14:20.304168   33048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:20.304212   33048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:20.318371   33048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36047
	I0422 11:14:20.318815   33048 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:20.319315   33048 main.go:141] libmachine: Using API Version  1
	I0422 11:14:20.319334   33048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:20.319620   33048 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:20.319791   33048 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:14:20.319980   33048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:20.320006   33048 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:14:20.322738   33048 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:20.323158   33048 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:14:20.323175   33048 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:20.323326   33048 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:14:20.323520   33048 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:14:20.323669   33048 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:14:20.323811   33048 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:14:20.401177   33048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:14:20.418947   33048 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:14:20.418973   33048 api_server.go:166] Checking apiserver status ...
	I0422 11:14:20.419012   33048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:14:20.441547   33048 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup
	W0422 11:14:20.453316   33048 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:14:20.453365   33048 ssh_runner.go:195] Run: ls
	I0422 11:14:20.458592   33048 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:14:20.463168   33048 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:14:20.463193   33048 status.go:422] ha-821265-m03 apiserver status = Running (err=<nil>)
	I0422 11:14:20.463202   33048 status.go:257] ha-821265-m03 status: &{Name:ha-821265-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:14:20.463215   33048 status.go:255] checking status of ha-821265-m04 ...
	I0422 11:14:20.463504   33048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:20.463532   33048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:20.478242   33048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I0422 11:14:20.478645   33048 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:20.479133   33048 main.go:141] libmachine: Using API Version  1
	I0422 11:14:20.479162   33048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:20.479493   33048 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:20.479669   33048 main.go:141] libmachine: (ha-821265-m04) Calling .GetState
	I0422 11:14:20.481174   33048 status.go:330] ha-821265-m04 host status = "Running" (err=<nil>)
	I0422 11:14:20.481201   33048 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:14:20.481495   33048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:20.481528   33048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:20.495720   33048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33817
	I0422 11:14:20.496180   33048 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:20.496599   33048 main.go:141] libmachine: Using API Version  1
	I0422 11:14:20.496619   33048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:20.496951   33048 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:20.497110   33048 main.go:141] libmachine: (ha-821265-m04) Calling .GetIP
	I0422 11:14:20.499618   33048 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:20.500032   33048 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:14:20.500067   33048 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:20.500240   33048 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:14:20.500493   33048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:20.500524   33048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:20.514748   33048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40013
	I0422 11:14:20.515134   33048 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:20.515540   33048 main.go:141] libmachine: Using API Version  1
	I0422 11:14:20.515560   33048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:20.515783   33048 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:20.515907   33048 main.go:141] libmachine: (ha-821265-m04) Calling .DriverName
	I0422 11:14:20.516078   33048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:20.516096   33048 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHHostname
	I0422 11:14:20.518990   33048 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:20.519439   33048 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:14:20.519481   33048 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:20.519627   33048 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHPort
	I0422 11:14:20.519904   33048 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHKeyPath
	I0422 11:14:20.520062   33048 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHUsername
	I0422 11:14:20.520216   33048 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m04/id_rsa Username:docker}
	I0422 11:14:20.601261   33048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:14:20.618816   33048 status.go:257] ha-821265-m04 status: &{Name:ha-821265-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr: exit status 7 (659.273782ms)

                                                
                                                
-- stdout --
	ha-821265
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-821265-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 11:14:27.171862   33186 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:14:27.171979   33186 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:14:27.172010   33186 out.go:304] Setting ErrFile to fd 2...
	I0422 11:14:27.172020   33186 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:14:27.172660   33186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:14:27.172962   33186 out.go:298] Setting JSON to false
	I0422 11:14:27.172990   33186 mustload.go:65] Loading cluster: ha-821265
	I0422 11:14:27.173141   33186 notify.go:220] Checking for updates...
	I0422 11:14:27.173579   33186 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:14:27.173605   33186 status.go:255] checking status of ha-821265 ...
	I0422 11:14:27.174071   33186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:27.174137   33186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:27.188556   33186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38823
	I0422 11:14:27.188975   33186 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:27.189650   33186 main.go:141] libmachine: Using API Version  1
	I0422 11:14:27.189680   33186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:27.190024   33186 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:27.190200   33186 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:14:27.191859   33186 status.go:330] ha-821265 host status = "Running" (err=<nil>)
	I0422 11:14:27.191880   33186 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:14:27.192157   33186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:27.192197   33186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:27.207720   33186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45847
	I0422 11:14:27.208096   33186 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:27.208515   33186 main.go:141] libmachine: Using API Version  1
	I0422 11:14:27.208539   33186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:27.208826   33186 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:27.209005   33186 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:14:27.212120   33186 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:27.212567   33186 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:14:27.212593   33186 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:27.212761   33186 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:14:27.213159   33186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:27.213209   33186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:27.226737   33186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34701
	I0422 11:14:27.227112   33186 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:27.227524   33186 main.go:141] libmachine: Using API Version  1
	I0422 11:14:27.227549   33186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:27.227828   33186 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:27.228018   33186 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:14:27.228200   33186 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:27.228241   33186 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:14:27.231054   33186 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:27.231469   33186 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:14:27.231506   33186 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:27.231621   33186 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:14:27.231825   33186 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:14:27.231978   33186 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:14:27.232138   33186 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:14:27.320458   33186 ssh_runner.go:195] Run: systemctl --version
	I0422 11:14:27.328213   33186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:14:27.347510   33186 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:14:27.347536   33186 api_server.go:166] Checking apiserver status ...
	I0422 11:14:27.347572   33186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:14:27.365459   33186 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0422 11:14:27.377605   33186 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:14:27.377649   33186 ssh_runner.go:195] Run: ls
	I0422 11:14:27.383214   33186 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:14:27.387738   33186 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:14:27.387759   33186 status.go:422] ha-821265 apiserver status = Running (err=<nil>)
	I0422 11:14:27.387771   33186 status.go:257] ha-821265 status: &{Name:ha-821265 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:14:27.387801   33186 status.go:255] checking status of ha-821265-m02 ...
	I0422 11:14:27.388093   33186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:27.388130   33186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:27.402632   33186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I0422 11:14:27.403084   33186 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:27.403540   33186 main.go:141] libmachine: Using API Version  1
	I0422 11:14:27.403564   33186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:27.403949   33186 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:27.404108   33186 main.go:141] libmachine: (ha-821265-m02) Calling .GetState
	I0422 11:14:27.405962   33186 status.go:330] ha-821265-m02 host status = "Stopped" (err=<nil>)
	I0422 11:14:27.405976   33186 status.go:343] host is not running, skipping remaining checks
	I0422 11:14:27.405982   33186 status.go:257] ha-821265-m02 status: &{Name:ha-821265-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:14:27.406007   33186 status.go:255] checking status of ha-821265-m03 ...
	I0422 11:14:27.406293   33186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:27.406325   33186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:27.420143   33186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40411
	I0422 11:14:27.420526   33186 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:27.421043   33186 main.go:141] libmachine: Using API Version  1
	I0422 11:14:27.421066   33186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:27.421383   33186 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:27.421539   33186 main.go:141] libmachine: (ha-821265-m03) Calling .GetState
	I0422 11:14:27.423146   33186 status.go:330] ha-821265-m03 host status = "Running" (err=<nil>)
	I0422 11:14:27.423164   33186 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:14:27.423539   33186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:27.423582   33186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:27.437322   33186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39007
	I0422 11:14:27.437716   33186 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:27.438165   33186 main.go:141] libmachine: Using API Version  1
	I0422 11:14:27.438185   33186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:27.438467   33186 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:27.438638   33186 main.go:141] libmachine: (ha-821265-m03) Calling .GetIP
	I0422 11:14:27.441958   33186 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:27.442436   33186 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:14:27.442453   33186 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:27.442578   33186 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:14:27.442967   33186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:27.443010   33186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:27.457730   33186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36311
	I0422 11:14:27.458186   33186 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:27.458717   33186 main.go:141] libmachine: Using API Version  1
	I0422 11:14:27.458737   33186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:27.459080   33186 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:27.459232   33186 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:14:27.459469   33186 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:27.459506   33186 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:14:27.462167   33186 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:27.462618   33186 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:14:27.462649   33186 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:27.462918   33186 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:14:27.463078   33186 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:14:27.463211   33186 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:14:27.463355   33186 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:14:27.546108   33186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:14:27.565762   33186 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:14:27.565786   33186 api_server.go:166] Checking apiserver status ...
	I0422 11:14:27.565822   33186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:14:27.583110   33186 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup
	W0422 11:14:27.594657   33186 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:14:27.594711   33186 ssh_runner.go:195] Run: ls
	I0422 11:14:27.599925   33186 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:14:27.612320   33186 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:14:27.612344   33186 status.go:422] ha-821265-m03 apiserver status = Running (err=<nil>)
	I0422 11:14:27.612352   33186 status.go:257] ha-821265-m03 status: &{Name:ha-821265-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:14:27.612365   33186 status.go:255] checking status of ha-821265-m04 ...
	I0422 11:14:27.612694   33186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:27.612727   33186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:27.629165   33186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42197
	I0422 11:14:27.629640   33186 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:27.630227   33186 main.go:141] libmachine: Using API Version  1
	I0422 11:14:27.630246   33186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:27.630544   33186 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:27.630731   33186 main.go:141] libmachine: (ha-821265-m04) Calling .GetState
	I0422 11:14:27.632405   33186 status.go:330] ha-821265-m04 host status = "Running" (err=<nil>)
	I0422 11:14:27.632420   33186 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:14:27.632679   33186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:27.632713   33186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:27.646918   33186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36191
	I0422 11:14:27.647257   33186 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:27.647692   33186 main.go:141] libmachine: Using API Version  1
	I0422 11:14:27.647716   33186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:27.648056   33186 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:27.648262   33186 main.go:141] libmachine: (ha-821265-m04) Calling .GetIP
	I0422 11:14:27.651047   33186 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:27.651496   33186 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:14:27.651515   33186 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:27.651664   33186 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:14:27.652077   33186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:27.652126   33186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:27.666964   33186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40367
	I0422 11:14:27.667444   33186 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:27.668032   33186 main.go:141] libmachine: Using API Version  1
	I0422 11:14:27.668052   33186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:27.668373   33186 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:27.668565   33186 main.go:141] libmachine: (ha-821265-m04) Calling .DriverName
	I0422 11:14:27.668810   33186 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:27.668829   33186 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHHostname
	I0422 11:14:27.671668   33186 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:27.672173   33186 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:14:27.672199   33186 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:27.672344   33186 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHPort
	I0422 11:14:27.672539   33186 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHKeyPath
	I0422 11:14:27.672667   33186 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHUsername
	I0422 11:14:27.672846   33186 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m04/id_rsa Username:docker}
	I0422 11:14:27.757791   33186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:14:27.775361   33186 status.go:257] ha-821265-m04 status: &{Name:ha-821265-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
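Between the runs above, ha-821265-m02 moves from Host:Error (SSH dial failing with "no route to host", exit status 3) to Host:Stopped (reported directly by the kvm2 driver's GetState, exit status 7). When triaging such a run by hand on the Jenkins host, the domain state can be confirmed by asking libvirt directly; a hypothetical Go wrapper around virsh, assuming virsh is installed and can reach the same libvirt connection the kvm2 driver uses:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// domState asks libvirt for the current state of a domain via virsh domstate.
// Purely illustrative; output is typically "running" or "shut off".
func domState(domain string) (string, error) {
	out, err := exec.Command("virsh", "domstate", domain).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Domain names taken from the status output above.
	for _, d := range []string{"ha-821265", "ha-821265-m02", "ha-821265-m03", "ha-821265-m04"} {
		state, err := domState(d)
		if err != nil {
			fmt.Printf("%s: virsh error: %v\n", d, err)
			continue
		}
		fmt.Printf("%s: %s\n", d, state)
	}
}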
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr: exit status 7 (676.172043ms)

                                                
                                                
-- stdout --
	ha-821265
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-821265-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 11:14:36.110630   33290 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:14:36.110743   33290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:14:36.110754   33290 out.go:304] Setting ErrFile to fd 2...
	I0422 11:14:36.110761   33290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:14:36.110968   33290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:14:36.111133   33290 out.go:298] Setting JSON to false
	I0422 11:14:36.111158   33290 mustload.go:65] Loading cluster: ha-821265
	I0422 11:14:36.111212   33290 notify.go:220] Checking for updates...
	I0422 11:14:36.111684   33290 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:14:36.111705   33290 status.go:255] checking status of ha-821265 ...
	I0422 11:14:36.112140   33290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:36.112201   33290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:36.130362   33290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46877
	I0422 11:14:36.130772   33290 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:36.131333   33290 main.go:141] libmachine: Using API Version  1
	I0422 11:14:36.131357   33290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:36.131677   33290 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:36.131856   33290 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:14:36.133636   33290 status.go:330] ha-821265 host status = "Running" (err=<nil>)
	I0422 11:14:36.133662   33290 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:14:36.133942   33290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:36.133990   33290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:36.148844   33290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34835
	I0422 11:14:36.149300   33290 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:36.149767   33290 main.go:141] libmachine: Using API Version  1
	I0422 11:14:36.149789   33290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:36.150121   33290 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:36.150275   33290 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:14:36.153360   33290 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:36.153797   33290 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:14:36.153831   33290 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:36.153937   33290 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:14:36.154367   33290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:36.154425   33290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:36.170093   33290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44017
	I0422 11:14:36.170490   33290 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:36.170976   33290 main.go:141] libmachine: Using API Version  1
	I0422 11:14:36.170997   33290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:36.171337   33290 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:36.171539   33290 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:14:36.171750   33290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:36.171778   33290 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:14:36.174785   33290 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:36.175303   33290 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:14:36.175336   33290 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:14:36.175504   33290 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:14:36.175679   33290 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:14:36.175817   33290 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:14:36.175955   33290 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:14:36.258767   33290 ssh_runner.go:195] Run: systemctl --version
	I0422 11:14:36.267338   33290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:14:36.288969   33290 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:14:36.289003   33290 api_server.go:166] Checking apiserver status ...
	I0422 11:14:36.289049   33290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:14:36.307997   33290 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0422 11:14:36.320798   33290 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:14:36.320854   33290 ssh_runner.go:195] Run: ls
	I0422 11:14:36.326010   33290 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:14:36.332633   33290 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:14:36.332657   33290 status.go:422] ha-821265 apiserver status = Running (err=<nil>)
	I0422 11:14:36.332668   33290 status.go:257] ha-821265 status: &{Name:ha-821265 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:14:36.332683   33290 status.go:255] checking status of ha-821265-m02 ...
	I0422 11:14:36.333050   33290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:36.333098   33290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:36.347345   33290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41275
	I0422 11:14:36.347796   33290 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:36.348243   33290 main.go:141] libmachine: Using API Version  1
	I0422 11:14:36.348263   33290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:36.348560   33290 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:36.348757   33290 main.go:141] libmachine: (ha-821265-m02) Calling .GetState
	I0422 11:14:36.350391   33290 status.go:330] ha-821265-m02 host status = "Stopped" (err=<nil>)
	I0422 11:14:36.350401   33290 status.go:343] host is not running, skipping remaining checks
	I0422 11:14:36.350407   33290 status.go:257] ha-821265-m02 status: &{Name:ha-821265-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:14:36.350419   33290 status.go:255] checking status of ha-821265-m03 ...
	I0422 11:14:36.350694   33290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:36.350743   33290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:36.365577   33290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37037
	I0422 11:14:36.366001   33290 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:36.366544   33290 main.go:141] libmachine: Using API Version  1
	I0422 11:14:36.366565   33290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:36.366924   33290 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:36.367150   33290 main.go:141] libmachine: (ha-821265-m03) Calling .GetState
	I0422 11:14:36.368716   33290 status.go:330] ha-821265-m03 host status = "Running" (err=<nil>)
	I0422 11:14:36.368733   33290 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:14:36.369163   33290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:36.369199   33290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:36.383535   33290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33235
	I0422 11:14:36.383889   33290 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:36.384304   33290 main.go:141] libmachine: Using API Version  1
	I0422 11:14:36.384327   33290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:36.384672   33290 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:36.384877   33290 main.go:141] libmachine: (ha-821265-m03) Calling .GetIP
	I0422 11:14:36.387647   33290 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:36.387998   33290 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:14:36.388019   33290 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:36.388142   33290 host.go:66] Checking if "ha-821265-m03" exists ...
	I0422 11:14:36.388489   33290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:36.388535   33290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:36.402692   33290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I0422 11:14:36.403150   33290 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:36.403596   33290 main.go:141] libmachine: Using API Version  1
	I0422 11:14:36.403618   33290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:36.403875   33290 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:36.404027   33290 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:14:36.404218   33290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:36.404242   33290 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:14:36.406887   33290 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:36.407373   33290 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:14:36.407400   33290 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:36.407588   33290 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:14:36.407761   33290 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:14:36.407899   33290 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:14:36.408021   33290 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:14:36.494665   33290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:14:36.514731   33290 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:14:36.514761   33290 api_server.go:166] Checking apiserver status ...
	I0422 11:14:36.514811   33290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:14:36.532920   33290 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup
	W0422 11:14:36.544440   33290 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1605/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:14:36.544497   33290 ssh_runner.go:195] Run: ls
	I0422 11:14:36.550081   33290 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:14:36.564218   33290 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:14:36.564248   33290 status.go:422] ha-821265-m03 apiserver status = Running (err=<nil>)
	I0422 11:14:36.564259   33290 status.go:257] ha-821265-m03 status: &{Name:ha-821265-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:14:36.564280   33290 status.go:255] checking status of ha-821265-m04 ...
	I0422 11:14:36.564683   33290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:36.564729   33290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:36.579197   33290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42587
	I0422 11:14:36.579637   33290 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:36.580122   33290 main.go:141] libmachine: Using API Version  1
	I0422 11:14:36.580145   33290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:36.580442   33290 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:36.580626   33290 main.go:141] libmachine: (ha-821265-m04) Calling .GetState
	I0422 11:14:36.582118   33290 status.go:330] ha-821265-m04 host status = "Running" (err=<nil>)
	I0422 11:14:36.582136   33290 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:14:36.582412   33290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:36.582448   33290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:36.596859   33290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34907
	I0422 11:14:36.597355   33290 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:36.597859   33290 main.go:141] libmachine: Using API Version  1
	I0422 11:14:36.597873   33290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:36.598214   33290 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:36.598366   33290 main.go:141] libmachine: (ha-821265-m04) Calling .GetIP
	I0422 11:14:36.601292   33290 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:36.601728   33290 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:14:36.601754   33290 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:36.602092   33290 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:14:36.602494   33290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:36.602540   33290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:36.617298   33290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36009
	I0422 11:14:36.617741   33290 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:36.618287   33290 main.go:141] libmachine: Using API Version  1
	I0422 11:14:36.618312   33290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:36.618611   33290 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:36.618781   33290 main.go:141] libmachine: (ha-821265-m04) Calling .DriverName
	I0422 11:14:36.618990   33290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:14:36.619010   33290 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHHostname
	I0422 11:14:36.622036   33290 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:36.622512   33290 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:14:36.622529   33290 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:36.622673   33290 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHPort
	I0422 11:14:36.622818   33290 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHKeyPath
	I0422 11:14:36.622957   33290 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHUsername
	I0422 11:14:36.623086   33290 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m04/id_rsa Username:docker}
	I0422 11:14:36.711104   33290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:14:36.727483   33290 status.go:257] ha-821265-m04 status: &{Name:ha-821265-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr" : exit status 7
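The exit status 7 above is consistent with the per-node state in the stdout block: ha-821265-m02 is still reported as Stopped after the "node start m02" attempt, and minikube status encodes host, kubelet/cluster, and apiserver/Kubernetes health as bit flags in its exit code, so one fully stopped control-plane node yields 1 + 2 + 4 = 7. A minimal Go sketch of that aggregation follows; the flag names and the helper are illustrative only, not minikube's actual constants or code.

	package main

	import "fmt"

	// nodeStatus mirrors the fields shown in the stdout block above.
	type nodeStatus struct {
		Name, Host, Kubelet, APIServer string
	}

	// Illustrative bit flags; minikube's real constants live in its
	// status command and may be named and ordered differently.
	const (
		hostNotRunning      = 1 << 0
		kubeletNotRunning   = 1 << 1
		apiServerNotRunning = 1 << 2
	)

	// exitCode ORs a flag for each component that is not healthy, so a
	// single stopped control-plane node is enough for a non-zero exit.
	func exitCode(nodes []nodeStatus) int {
		code := 0
		for _, st := range nodes {
			if st.Host != "Running" {
				code |= hostNotRunning
			}
			if st.Kubelet != "Running" {
				code |= kubeletNotRunning
			}
			if st.APIServer != "Running" && st.APIServer != "Irrelevant" {
				code |= apiServerNotRunning
			}
		}
		return code
	}

	func main() {
		// The four nodes exactly as reported above; m02 is fully stopped.
		nodes := []nodeStatus{
			{"ha-821265", "Running", "Running", "Running"},
			{"ha-821265-m02", "Stopped", "Stopped", "Stopped"},
			{"ha-821265-m03", "Running", "Running", "Running"},
			{"ha-821265-m04", "Running", "Running", "Irrelevant"},
		}
		fmt.Println(exitCode(nodes)) // prints 7, matching the failure above
	}
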
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-821265 -n ha-821265
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-821265 logs -n 25: (1.593742909s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m03:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265:/home/docker/cp-test_ha-821265-m03_ha-821265.txt                       |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265 sudo cat                                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m03_ha-821265.txt                                 |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m03:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m02:/home/docker/cp-test_ha-821265-m03_ha-821265-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265-m02 sudo cat                                          | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m03_ha-821265-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m03:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04:/home/docker/cp-test_ha-821265-m03_ha-821265-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265-m04 sudo cat                                          | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m03_ha-821265-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-821265 cp testdata/cp-test.txt                                                | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1102049705/001/cp-test_ha-821265-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265:/home/docker/cp-test_ha-821265-m04_ha-821265.txt                       |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265 sudo cat                                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m04_ha-821265.txt                                 |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m02:/home/docker/cp-test_ha-821265-m04_ha-821265-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265-m02 sudo cat                                          | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m04_ha-821265-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m03:/home/docker/cp-test_ha-821265-m04_ha-821265-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265-m03 sudo cat                                          | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m04_ha-821265-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-821265 node stop m02 -v=7                                                     | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-821265 node start m02 -v=7                                                    | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 11:06:36
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 11:06:36.919621   27717 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:06:36.919762   27717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:06:36.919772   27717 out.go:304] Setting ErrFile to fd 2...
	I0422 11:06:36.919776   27717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:06:36.920011   27717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:06:36.920598   27717 out.go:298] Setting JSON to false
	I0422 11:06:36.921508   27717 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2940,"bootTime":1713781057,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 11:06:36.921564   27717 start.go:139] virtualization: kvm guest
	I0422 11:06:36.924070   27717 out.go:177] * [ha-821265] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 11:06:36.925731   27717 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 11:06:36.925754   27717 notify.go:220] Checking for updates...
	I0422 11:06:36.927327   27717 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 11:06:36.929125   27717 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 11:06:36.930866   27717 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:06:36.932528   27717 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 11:06:36.933849   27717 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 11:06:36.935461   27717 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 11:06:36.970577   27717 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 11:06:36.971929   27717 start.go:297] selected driver: kvm2
	I0422 11:06:36.971944   27717 start.go:901] validating driver "kvm2" against <nil>
	I0422 11:06:36.971968   27717 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 11:06:36.972628   27717 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 11:06:36.972698   27717 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18711-7633/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 11:06:36.987477   27717 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 11:06:36.987571   27717 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 11:06:36.987822   27717 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 11:06:36.987880   27717 cni.go:84] Creating CNI manager for ""
	I0422 11:06:36.987892   27717 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0422 11:06:36.987899   27717 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0422 11:06:36.987951   27717 start.go:340] cluster config:
	{Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:06:36.988054   27717 iso.go:125] acquiring lock: {Name:mkb6ac9fd17ffabc92a94047094130aad6203a95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 11:06:36.990053   27717 out.go:177] * Starting "ha-821265" primary control-plane node in "ha-821265" cluster
	I0422 11:06:36.991343   27717 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 11:06:36.991387   27717 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 11:06:36.991394   27717 cache.go:56] Caching tarball of preloaded images
	I0422 11:06:36.991465   27717 preload.go:173] Found /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 11:06:36.991475   27717 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 11:06:36.991772   27717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:06:36.991791   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json: {Name:mk1d94c9e38faf6fed2be29eb597dfabf13d6e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:06:36.991917   27717 start.go:360] acquireMachinesLock for ha-821265: {Name:mk5cb9b294e703b264c1f97ac968ffd01e93b576 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 11:06:36.991945   27717 start.go:364] duration metric: took 14.45µs to acquireMachinesLock for "ha-821265"
	I0422 11:06:36.991960   27717 start.go:93] Provisioning new machine with config: &{Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 11:06:36.992013   27717 start.go:125] createHost starting for "" (driver="kvm2")
	I0422 11:06:36.993682   27717 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 11:06:36.993801   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:06:36.993839   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:06:37.007885   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44193
	I0422 11:06:37.008312   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:06:37.008926   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:06:37.008958   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:06:37.009325   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:06:37.009545   27717 main.go:141] libmachine: (ha-821265) Calling .GetMachineName
	I0422 11:06:37.009729   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:06:37.009882   27717 start.go:159] libmachine.API.Create for "ha-821265" (driver="kvm2")
	I0422 11:06:37.009910   27717 client.go:168] LocalClient.Create starting
	I0422 11:06:37.009945   27717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem
	I0422 11:06:37.009987   27717 main.go:141] libmachine: Decoding PEM data...
	I0422 11:06:37.010001   27717 main.go:141] libmachine: Parsing certificate...
	I0422 11:06:37.010050   27717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem
	I0422 11:06:37.010067   27717 main.go:141] libmachine: Decoding PEM data...
	I0422 11:06:37.010079   27717 main.go:141] libmachine: Parsing certificate...
	I0422 11:06:37.010092   27717 main.go:141] libmachine: Running pre-create checks...
	I0422 11:06:37.010100   27717 main.go:141] libmachine: (ha-821265) Calling .PreCreateCheck
	I0422 11:06:37.010493   27717 main.go:141] libmachine: (ha-821265) Calling .GetConfigRaw
	I0422 11:06:37.010914   27717 main.go:141] libmachine: Creating machine...
	I0422 11:06:37.010927   27717 main.go:141] libmachine: (ha-821265) Calling .Create
	I0422 11:06:37.011077   27717 main.go:141] libmachine: (ha-821265) Creating KVM machine...
	I0422 11:06:37.012339   27717 main.go:141] libmachine: (ha-821265) DBG | found existing default KVM network
	I0422 11:06:37.012967   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:37.012822   27741 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0422 11:06:37.012990   27717 main.go:141] libmachine: (ha-821265) DBG | created network xml: 
	I0422 11:06:37.012999   27717 main.go:141] libmachine: (ha-821265) DBG | <network>
	I0422 11:06:37.013004   27717 main.go:141] libmachine: (ha-821265) DBG |   <name>mk-ha-821265</name>
	I0422 11:06:37.013010   27717 main.go:141] libmachine: (ha-821265) DBG |   <dns enable='no'/>
	I0422 11:06:37.013020   27717 main.go:141] libmachine: (ha-821265) DBG |   
	I0422 11:06:37.013029   27717 main.go:141] libmachine: (ha-821265) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0422 11:06:37.013034   27717 main.go:141] libmachine: (ha-821265) DBG |     <dhcp>
	I0422 11:06:37.013043   27717 main.go:141] libmachine: (ha-821265) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0422 11:06:37.013048   27717 main.go:141] libmachine: (ha-821265) DBG |     </dhcp>
	I0422 11:06:37.013054   27717 main.go:141] libmachine: (ha-821265) DBG |   </ip>
	I0422 11:06:37.013059   27717 main.go:141] libmachine: (ha-821265) DBG |   
	I0422 11:06:37.013064   27717 main.go:141] libmachine: (ha-821265) DBG | </network>
	I0422 11:06:37.013071   27717 main.go:141] libmachine: (ha-821265) DBG | 
	I0422 11:06:37.018249   27717 main.go:141] libmachine: (ha-821265) DBG | trying to create private KVM network mk-ha-821265 192.168.39.0/24...
	I0422 11:06:37.082455   27717 main.go:141] libmachine: (ha-821265) DBG | private KVM network mk-ha-821265 192.168.39.0/24 created
	I0422 11:06:37.082482   27717 main.go:141] libmachine: (ha-821265) Setting up store path in /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265 ...
	I0422 11:06:37.082496   27717 main.go:141] libmachine: (ha-821265) Building disk image from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0422 11:06:37.082576   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:37.082503   27741 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:06:37.082767   27717 main.go:141] libmachine: (ha-821265) Downloading /home/jenkins/minikube-integration/18711-7633/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0422 11:06:37.315869   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:37.315744   27741 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa...
	I0422 11:06:37.473307   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:37.473180   27741 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/ha-821265.rawdisk...
	I0422 11:06:37.473340   27717 main.go:141] libmachine: (ha-821265) DBG | Writing magic tar header
	I0422 11:06:37.473354   27717 main.go:141] libmachine: (ha-821265) DBG | Writing SSH key tar header
	I0422 11:06:37.473371   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:37.473325   27741 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265 ...
	I0422 11:06:37.473528   27717 main.go:141] libmachine: (ha-821265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265
	I0422 11:06:37.473562   27717 main.go:141] libmachine: (ha-821265) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265 (perms=drwx------)
	I0422 11:06:37.473570   27717 main.go:141] libmachine: (ha-821265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines
	I0422 11:06:37.473577   27717 main.go:141] libmachine: (ha-821265) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines (perms=drwxr-xr-x)
	I0422 11:06:37.473587   27717 main.go:141] libmachine: (ha-821265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:06:37.473605   27717 main.go:141] libmachine: (ha-821265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633
	I0422 11:06:37.473614   27717 main.go:141] libmachine: (ha-821265) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 11:06:37.473626   27717 main.go:141] libmachine: (ha-821265) DBG | Checking permissions on dir: /home/jenkins
	I0422 11:06:37.473631   27717 main.go:141] libmachine: (ha-821265) DBG | Checking permissions on dir: /home
	I0422 11:06:37.473639   27717 main.go:141] libmachine: (ha-821265) DBG | Skipping /home - not owner
	I0422 11:06:37.473648   27717 main.go:141] libmachine: (ha-821265) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube (perms=drwxr-xr-x)
	I0422 11:06:37.473655   27717 main.go:141] libmachine: (ha-821265) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633 (perms=drwxrwxr-x)
	I0422 11:06:37.473675   27717 main.go:141] libmachine: (ha-821265) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 11:06:37.473685   27717 main.go:141] libmachine: (ha-821265) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 11:06:37.473703   27717 main.go:141] libmachine: (ha-821265) Creating domain...
	I0422 11:06:37.474882   27717 main.go:141] libmachine: (ha-821265) define libvirt domain using xml: 
	I0422 11:06:37.474908   27717 main.go:141] libmachine: (ha-821265) <domain type='kvm'>
	I0422 11:06:37.474915   27717 main.go:141] libmachine: (ha-821265)   <name>ha-821265</name>
	I0422 11:06:37.474925   27717 main.go:141] libmachine: (ha-821265)   <memory unit='MiB'>2200</memory>
	I0422 11:06:37.474932   27717 main.go:141] libmachine: (ha-821265)   <vcpu>2</vcpu>
	I0422 11:06:37.474936   27717 main.go:141] libmachine: (ha-821265)   <features>
	I0422 11:06:37.474941   27717 main.go:141] libmachine: (ha-821265)     <acpi/>
	I0422 11:06:37.474948   27717 main.go:141] libmachine: (ha-821265)     <apic/>
	I0422 11:06:37.474953   27717 main.go:141] libmachine: (ha-821265)     <pae/>
	I0422 11:06:37.474964   27717 main.go:141] libmachine: (ha-821265)     
	I0422 11:06:37.474968   27717 main.go:141] libmachine: (ha-821265)   </features>
	I0422 11:06:37.474973   27717 main.go:141] libmachine: (ha-821265)   <cpu mode='host-passthrough'>
	I0422 11:06:37.474979   27717 main.go:141] libmachine: (ha-821265)   
	I0422 11:06:37.474986   27717 main.go:141] libmachine: (ha-821265)   </cpu>
	I0422 11:06:37.475011   27717 main.go:141] libmachine: (ha-821265)   <os>
	I0422 11:06:37.475040   27717 main.go:141] libmachine: (ha-821265)     <type>hvm</type>
	I0422 11:06:37.475118   27717 main.go:141] libmachine: (ha-821265)     <boot dev='cdrom'/>
	I0422 11:06:37.475145   27717 main.go:141] libmachine: (ha-821265)     <boot dev='hd'/>
	I0422 11:06:37.475155   27717 main.go:141] libmachine: (ha-821265)     <bootmenu enable='no'/>
	I0422 11:06:37.475164   27717 main.go:141] libmachine: (ha-821265)   </os>
	I0422 11:06:37.475180   27717 main.go:141] libmachine: (ha-821265)   <devices>
	I0422 11:06:37.475195   27717 main.go:141] libmachine: (ha-821265)     <disk type='file' device='cdrom'>
	I0422 11:06:37.475208   27717 main.go:141] libmachine: (ha-821265)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/boot2docker.iso'/>
	I0422 11:06:37.475240   27717 main.go:141] libmachine: (ha-821265)       <target dev='hdc' bus='scsi'/>
	I0422 11:06:37.475245   27717 main.go:141] libmachine: (ha-821265)       <readonly/>
	I0422 11:06:37.475252   27717 main.go:141] libmachine: (ha-821265)     </disk>
	I0422 11:06:37.475259   27717 main.go:141] libmachine: (ha-821265)     <disk type='file' device='disk'>
	I0422 11:06:37.475270   27717 main.go:141] libmachine: (ha-821265)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 11:06:37.475292   27717 main.go:141] libmachine: (ha-821265)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/ha-821265.rawdisk'/>
	I0422 11:06:37.475311   27717 main.go:141] libmachine: (ha-821265)       <target dev='hda' bus='virtio'/>
	I0422 11:06:37.475324   27717 main.go:141] libmachine: (ha-821265)     </disk>
	I0422 11:06:37.475338   27717 main.go:141] libmachine: (ha-821265)     <interface type='network'>
	I0422 11:06:37.475353   27717 main.go:141] libmachine: (ha-821265)       <source network='mk-ha-821265'/>
	I0422 11:06:37.475366   27717 main.go:141] libmachine: (ha-821265)       <model type='virtio'/>
	I0422 11:06:37.475380   27717 main.go:141] libmachine: (ha-821265)     </interface>
	I0422 11:06:37.475397   27717 main.go:141] libmachine: (ha-821265)     <interface type='network'>
	I0422 11:06:37.475410   27717 main.go:141] libmachine: (ha-821265)       <source network='default'/>
	I0422 11:06:37.475421   27717 main.go:141] libmachine: (ha-821265)       <model type='virtio'/>
	I0422 11:06:37.475436   27717 main.go:141] libmachine: (ha-821265)     </interface>
	I0422 11:06:37.475449   27717 main.go:141] libmachine: (ha-821265)     <serial type='pty'>
	I0422 11:06:37.475473   27717 main.go:141] libmachine: (ha-821265)       <target port='0'/>
	I0422 11:06:37.475489   27717 main.go:141] libmachine: (ha-821265)     </serial>
	I0422 11:06:37.475508   27717 main.go:141] libmachine: (ha-821265)     <console type='pty'>
	I0422 11:06:37.475525   27717 main.go:141] libmachine: (ha-821265)       <target type='serial' port='0'/>
	I0422 11:06:37.475540   27717 main.go:141] libmachine: (ha-821265)     </console>
	I0422 11:06:37.475550   27717 main.go:141] libmachine: (ha-821265)     <rng model='virtio'>
	I0422 11:06:37.475562   27717 main.go:141] libmachine: (ha-821265)       <backend model='random'>/dev/random</backend>
	I0422 11:06:37.475568   27717 main.go:141] libmachine: (ha-821265)     </rng>
	I0422 11:06:37.475573   27717 main.go:141] libmachine: (ha-821265)     
	I0422 11:06:37.475579   27717 main.go:141] libmachine: (ha-821265)     
	I0422 11:06:37.475584   27717 main.go:141] libmachine: (ha-821265)   </devices>
	I0422 11:06:37.475590   27717 main.go:141] libmachine: (ha-821265) </domain>
	I0422 11:06:37.475604   27717 main.go:141] libmachine: (ha-821265) 
	I0422 11:06:37.479726   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:97:60:69 in network default
	I0422 11:06:37.480316   27717 main.go:141] libmachine: (ha-821265) Ensuring networks are active...
	I0422 11:06:37.480339   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:37.480961   27717 main.go:141] libmachine: (ha-821265) Ensuring network default is active
	I0422 11:06:37.481262   27717 main.go:141] libmachine: (ha-821265) Ensuring network mk-ha-821265 is active
	I0422 11:06:37.481907   27717 main.go:141] libmachine: (ha-821265) Getting domain xml...
	I0422 11:06:37.482822   27717 main.go:141] libmachine: (ha-821265) Creating domain...
	I0422 11:06:38.657377   27717 main.go:141] libmachine: (ha-821265) Waiting to get IP...
	I0422 11:06:38.658275   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:38.658715   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:38.658737   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:38.658676   27741 retry.go:31] will retry after 211.485012ms: waiting for machine to come up
	I0422 11:06:38.872231   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:38.872917   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:38.872945   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:38.872884   27741 retry.go:31] will retry after 241.351108ms: waiting for machine to come up
	I0422 11:06:39.116484   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:39.116967   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:39.117000   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:39.116940   27741 retry.go:31] will retry after 389.175984ms: waiting for machine to come up
	I0422 11:06:39.507595   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:39.508169   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:39.508210   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:39.508121   27741 retry.go:31] will retry after 609.240168ms: waiting for machine to come up
	I0422 11:06:40.118900   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:40.119459   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:40.119484   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:40.119402   27741 retry.go:31] will retry after 555.876003ms: waiting for machine to come up
	I0422 11:06:40.677408   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:40.677839   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:40.677871   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:40.677811   27741 retry.go:31] will retry after 871.14358ms: waiting for machine to come up
	I0422 11:06:41.550850   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:41.551347   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:41.551387   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:41.551291   27741 retry.go:31] will retry after 844.675065ms: waiting for machine to come up
	I0422 11:06:42.398045   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:42.398907   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:42.398927   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:42.398861   27741 retry.go:31] will retry after 1.2788083s: waiting for machine to come up
	I0422 11:06:43.679116   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:43.679655   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:43.679678   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:43.679614   27741 retry.go:31] will retry after 1.645587291s: waiting for machine to come up
	I0422 11:06:45.327170   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:45.327642   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:45.327673   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:45.327582   27741 retry.go:31] will retry after 2.226967378s: waiting for machine to come up
	I0422 11:06:47.556383   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:47.556947   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:47.556988   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:47.556898   27741 retry.go:31] will retry after 2.091166086s: waiting for machine to come up
	I0422 11:06:49.651078   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:49.651488   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:49.651511   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:49.651450   27741 retry.go:31] will retry after 2.605110739s: waiting for machine to come up
	I0422 11:06:52.257652   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:52.258160   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:52.258190   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:52.258110   27741 retry.go:31] will retry after 4.516549684s: waiting for machine to come up
	I0422 11:06:56.779760   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:06:56.780137   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find current IP address of domain ha-821265 in network mk-ha-821265
	I0422 11:06:56.780164   27717 main.go:141] libmachine: (ha-821265) DBG | I0422 11:06:56.780091   27741 retry.go:31] will retry after 4.448627626s: waiting for machine to come up
	I0422 11:07:01.233713   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.234234   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has current primary IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.234260   27717 main.go:141] libmachine: (ha-821265) Found IP for machine: 192.168.39.150
	I0422 11:07:01.234273   27717 main.go:141] libmachine: (ha-821265) Reserving static IP address...
	I0422 11:07:01.234681   27717 main.go:141] libmachine: (ha-821265) DBG | unable to find host DHCP lease matching {name: "ha-821265", mac: "52:54:00:17:f6:ad", ip: "192.168.39.150"} in network mk-ha-821265
	I0422 11:07:01.307403   27717 main.go:141] libmachine: (ha-821265) Reserved static IP address: 192.168.39.150
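
For reference, the "will retry after ..." lines above come from a grow-and-retry poll while the driver waits for the DHCP lease to appear. A minimal, self-contained Go sketch of that pattern (function name and the exact backoff growth are illustrative, not minikube's code):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil polls fn with an increasing, jittered delay until it
	// succeeds or the timeout elapses, roughly mirroring the
	// 211ms -> 4.5s progression visible in the log.
	func retryUntil(timeout time.Duration, fn func() error) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for {
			if err := fn(); err == nil {
				return nil
			} else if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			} else {
				wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
				fmt.Printf("will retry after %v: %v\n", wait, err)
				time.Sleep(wait)
				delay = delay * 3 / 2 // grow the base delay
			}
		}
	}

	func main() {
		attempts := 0
		err := retryUntil(30*time.Second, func() error {
			attempts++
			if attempts < 5 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
		fmt.Println("done:", err)
	}
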
	I0422 11:07:01.307430   27717 main.go:141] libmachine: (ha-821265) DBG | Getting to WaitForSSH function...
	I0422 11:07:01.307436   27717 main.go:141] libmachine: (ha-821265) Waiting for SSH to be available...
	I0422 11:07:01.309929   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.310292   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:minikube Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:01.310345   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.310399   27717 main.go:141] libmachine: (ha-821265) DBG | Using SSH client type: external
	I0422 11:07:01.310418   27717 main.go:141] libmachine: (ha-821265) DBG | Using SSH private key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa (-rw-------)
	I0422 11:07:01.310442   27717 main.go:141] libmachine: (ha-821265) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 11:07:01.310453   27717 main.go:141] libmachine: (ha-821265) DBG | About to run SSH command:
	I0422 11:07:01.310464   27717 main.go:141] libmachine: (ha-821265) DBG | exit 0
	I0422 11:07:01.433232   27717 main.go:141] libmachine: (ha-821265) DBG | SSH cmd err, output: <nil>: 
	I0422 11:07:01.433519   27717 main.go:141] libmachine: (ha-821265) KVM machine creation complete!
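
The "exit 0" probe above is how creation decides SSH is reachable: the external ssh binary is invoked with the non-interactive options shown and a trivial command. A rough Go sketch of that check (paths and the helper name are placeholders, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady shells out to /usr/bin/ssh, runs `exit 0` on the guest,
	// and reports whether the command succeeded.
	func sshReady(ip, keyPath string) bool {
		args := []string{
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@" + ip,
			"exit 0",
		}
		return exec.Command("/usr/bin/ssh", args...).Run() == nil
	}

	func main() {
		for !sshReady("192.168.39.150", "/path/to/id_rsa") {
			fmt.Println("waiting for SSH to be available...")
			time.Sleep(2 * time.Second)
		}
		fmt.Println("SSH is up")
	}
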
	I0422 11:07:01.433809   27717 main.go:141] libmachine: (ha-821265) Calling .GetConfigRaw
	I0422 11:07:01.434391   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:01.434626   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:01.434811   27717 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 11:07:01.434825   27717 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:07:01.436050   27717 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 11:07:01.436068   27717 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 11:07:01.436076   27717 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 11:07:01.436085   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:01.438380   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.438805   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:01.438848   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.438944   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:01.439121   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:01.439270   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:01.439408   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:01.439539   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:07:01.439790   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:07:01.439804   27717 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 11:07:01.544455   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 11:07:01.544479   27717 main.go:141] libmachine: Detecting the provisioner...
	I0422 11:07:01.544486   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:01.546915   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.547250   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:01.547278   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.547470   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:01.547664   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:01.547824   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:01.547962   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:01.548112   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:07:01.548272   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:07:01.548288   27717 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 11:07:01.650193   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 11:07:01.650278   27717 main.go:141] libmachine: found compatible host: buildroot
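
The provisioner is picked by reading /etc/os-release over SSH and matching on its fields; here ID=buildroot selects the buildroot provisioner. A small illustrative Go parser for that output (the helper name is an assumption, not minikube's API):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// osReleaseField returns the unquoted value of a KEY=value line.
	func osReleaseField(osRelease, key string) string {
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			line := sc.Text()
			if strings.HasPrefix(line, key+"=") {
				return strings.Trim(strings.TrimPrefix(line, key+"="), `"`)
			}
		}
		return ""
	}

	func main() {
		// Captured from the guest, as shown in the log above.
		out := "NAME=Buildroot\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		if osReleaseField(out, "ID") == "buildroot" {
			fmt.Println("found compatible host: buildroot")
		}
	}
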
	I0422 11:07:01.650290   27717 main.go:141] libmachine: Provisioning with buildroot...
	I0422 11:07:01.650297   27717 main.go:141] libmachine: (ha-821265) Calling .GetMachineName
	I0422 11:07:01.650556   27717 buildroot.go:166] provisioning hostname "ha-821265"
	I0422 11:07:01.650577   27717 main.go:141] libmachine: (ha-821265) Calling .GetMachineName
	I0422 11:07:01.650758   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:01.653134   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.653592   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:01.653639   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.653745   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:01.653930   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:01.654084   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:01.654218   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:01.654365   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:07:01.654559   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:07:01.654571   27717 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-821265 && echo "ha-821265" | sudo tee /etc/hostname
	I0422 11:07:01.772663   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-821265
	
	I0422 11:07:01.772688   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:01.775340   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.775659   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:01.775679   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.775818   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:01.776056   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:01.776210   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:01.776442   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:01.776604   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:07:01.776812   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:07:01.776835   27717 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-821265' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-821265/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-821265' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 11:07:01.892321   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 11:07:01.892350   27717 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18711-7633/.minikube CaCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18711-7633/.minikube}
	I0422 11:07:01.892389   27717 buildroot.go:174] setting up certificates
	I0422 11:07:01.892400   27717 provision.go:84] configureAuth start
	I0422 11:07:01.892411   27717 main.go:141] libmachine: (ha-821265) Calling .GetMachineName
	I0422 11:07:01.892751   27717 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:07:01.895459   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.895794   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:01.895817   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.895959   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:01.898184   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.898552   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:01.898586   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:01.898659   27717 provision.go:143] copyHostCerts
	I0422 11:07:01.898687   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:07:01.898718   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem, removing ...
	I0422 11:07:01.898726   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:07:01.898799   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem (1123 bytes)
	I0422 11:07:01.898897   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:07:01.898919   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem, removing ...
	I0422 11:07:01.898924   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:07:01.898951   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem (1679 bytes)
	I0422 11:07:01.899003   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:07:01.899019   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem, removing ...
	I0422 11:07:01.899023   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:07:01.899043   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem (1078 bytes)
	I0422 11:07:01.899099   27717 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem org=jenkins.ha-821265 san=[127.0.0.1 192.168.39.150 ha-821265 localhost minikube]
	I0422 11:07:02.062780   27717 provision.go:177] copyRemoteCerts
	I0422 11:07:02.062837   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 11:07:02.062858   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:02.065745   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.065962   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.065993   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.066162   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:02.066359   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:02.066480   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:02.066589   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:07:02.151942   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 11:07:02.152000   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 11:07:02.183472   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 11:07:02.183535   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 11:07:02.211692   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 11:07:02.211752   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0422 11:07:02.239699   27717 provision.go:87] duration metric: took 347.283555ms to configureAuth
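
configureAuth issues a server certificate signed by the local CA with the SANs listed above (127.0.0.1, 192.168.39.150, ha-821265, localhost, minikube) and then copies it to /etc/docker on the guest. A self-contained Go sketch of issuing such a SAN certificate with the standard crypto/x509 package (error handling trimmed for brevity; this is illustrative, not minikube's certs.go):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA; in the real flow the CA key/cert already exist on disk.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs seen in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "ha-821265", Organization: []string{"jenkins.ha-821265"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-821265", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.150")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
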
	I0422 11:07:02.239773   27717 buildroot.go:189] setting minikube options for container-runtime
	I0422 11:07:02.239979   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:07:02.240061   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:02.242574   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.243051   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.243079   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.243250   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:02.243385   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:02.243491   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:02.243634   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:02.243784   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:07:02.243942   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:07:02.243959   27717 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 11:07:02.531139   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 11:07:02.531181   27717 main.go:141] libmachine: Checking connection to Docker...
	I0422 11:07:02.531192   27717 main.go:141] libmachine: (ha-821265) Calling .GetURL
	I0422 11:07:02.532667   27717 main.go:141] libmachine: (ha-821265) DBG | Using libvirt version 6000000
	I0422 11:07:02.534749   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.535091   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.535122   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.535303   27717 main.go:141] libmachine: Docker is up and running!
	I0422 11:07:02.535317   27717 main.go:141] libmachine: Reticulating splines...
	I0422 11:07:02.535326   27717 client.go:171] duration metric: took 25.525404418s to LocalClient.Create
	I0422 11:07:02.535352   27717 start.go:167] duration metric: took 25.525468272s to libmachine.API.Create "ha-821265"
	I0422 11:07:02.535364   27717 start.go:293] postStartSetup for "ha-821265" (driver="kvm2")
	I0422 11:07:02.535378   27717 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 11:07:02.535399   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:02.535670   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 11:07:02.535716   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:02.538379   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.538870   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.538899   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.539053   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:02.539264   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:02.539395   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:02.539530   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:07:02.620393   27717 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 11:07:02.625633   27717 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 11:07:02.625662   27717 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/addons for local assets ...
	I0422 11:07:02.625722   27717 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/files for local assets ...
	I0422 11:07:02.625820   27717 filesync.go:149] local asset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> 149452.pem in /etc/ssl/certs
	I0422 11:07:02.625837   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /etc/ssl/certs/149452.pem
	I0422 11:07:02.625958   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 11:07:02.636656   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:07:02.664641   27717 start.go:296] duration metric: took 129.264119ms for postStartSetup
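
postStartSetup scans the host-side .minikube/addons and .minikube/files trees and mirrors any hits onto the guest at the same path (here files/etc/ssl/certs/149452.pem lands at /etc/ssl/certs/149452.pem). A rough Go equivalent of that scan (the function name is a placeholder):

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
		"strings"
	)

	// localAssets walks root and returns the guest-side destination path
	// for every regular file found under it.
	func localAssets(root string) ([]string, error) {
		var dests []string
		err := filepath.WalkDir(root, func(path string, d fs.DirEntry, walkErr error) error {
			if walkErr != nil || d.IsDir() {
				return walkErr
			}
			rel := strings.TrimPrefix(path, root+string(filepath.Separator))
			dests = append(dests, "/"+filepath.ToSlash(rel))
			return nil
		})
		return dests, err
	}

	func main() {
		dests, err := localAssets("/home/jenkins/minikube-integration/18711-7633/.minikube/files")
		fmt.Println(dests, err)
	}
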
	I0422 11:07:02.664684   27717 main.go:141] libmachine: (ha-821265) Calling .GetConfigRaw
	I0422 11:07:02.665249   27717 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:07:02.668184   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.668719   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.668744   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.669026   27717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:07:02.669246   27717 start.go:128] duration metric: took 25.677224027s to createHost
	I0422 11:07:02.669273   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:02.671508   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.671839   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.671866   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.672015   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:02.672199   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:02.672380   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:02.672552   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:02.672795   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:07:02.673000   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:07:02.673016   27717 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 11:07:02.774061   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713784022.746888880
	
	I0422 11:07:02.774083   27717 fix.go:216] guest clock: 1713784022.746888880
	I0422 11:07:02.774089   27717 fix.go:229] Guest: 2024-04-22 11:07:02.74688888 +0000 UTC Remote: 2024-04-22 11:07:02.669261285 +0000 UTC m=+25.795587930 (delta=77.627595ms)
	I0422 11:07:02.774108   27717 fix.go:200] guest clock delta is within tolerance: 77.627595ms
	I0422 11:07:02.774113   27717 start.go:83] releasing machines lock for "ha-821265", held for 25.78216251s
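
The guest clock check above parses the output of "date +%s.%N" on the guest and accepts the machine when the offset from the host clock stays inside a small tolerance. A tiny illustrative Go version of that comparison, reusing the two timestamps from the log (the tolerance value is an assumption):

	package main

	import (
		"fmt"
		"math"
		"time"
	)

	// withinTolerance reports whether the guest/host clock delta is small enough.
	func withinTolerance(guest, host time.Time, tol time.Duration) bool {
		return math.Abs(float64(guest.Sub(host))) <= float64(tol)
	}

	func main() {
		guest := time.Unix(1713784022, 746888880) // parsed from "date +%s.%N"
		host := time.Unix(1713784022, 669261285)  // host-side timestamp from the log
		fmt.Println("delta:", guest.Sub(host), "ok:", withinTolerance(guest, host, 2*time.Second))
	}
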
	I0422 11:07:02.774131   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:02.774387   27717 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:07:02.777343   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.777706   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.777743   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.777889   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:02.778565   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:02.778741   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:02.778837   27717 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 11:07:02.778891   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:02.779131   27717 ssh_runner.go:195] Run: cat /version.json
	I0422 11:07:02.779154   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:02.781537   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.781682   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.781775   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.781800   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.781936   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:02.782065   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:02.782090   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:02.782115   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:02.782222   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:02.782356   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:02.782358   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:02.782523   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:02.782541   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:07:02.782660   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:07:02.858772   27717 ssh_runner.go:195] Run: systemctl --version
	I0422 11:07:02.884766   27717 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 11:07:03.053932   27717 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 11:07:03.060762   27717 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 11:07:03.060845   27717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 11:07:03.079663   27717 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 11:07:03.079695   27717 start.go:494] detecting cgroup driver to use...
	I0422 11:07:03.079752   27717 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 11:07:03.099187   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 11:07:03.114267   27717 docker.go:217] disabling cri-docker service (if available) ...
	I0422 11:07:03.114320   27717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 11:07:03.128831   27717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 11:07:03.143117   27717 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 11:07:03.264431   27717 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 11:07:03.410999   27717 docker.go:233] disabling docker service ...
	I0422 11:07:03.411066   27717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 11:07:03.427738   27717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 11:07:03.442992   27717 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 11:07:03.590020   27717 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 11:07:03.724028   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 11:07:03.739776   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 11:07:03.760494   27717 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 11:07:03.760566   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:07:03.771686   27717 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 11:07:03.771757   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:07:03.782899   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:07:03.793763   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:07:03.804969   27717 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 11:07:03.816456   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:07:03.827577   27717 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:07:03.847124   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:07:03.858582   27717 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 11:07:03.868759   27717 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 11:07:03.868819   27717 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 11:07:03.884219   27717 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 11:07:03.895147   27717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:07:04.024227   27717 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 11:07:04.169763   27717 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 11:07:04.169845   27717 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 11:07:04.175638   27717 start.go:562] Will wait 60s for crictl version
	I0422 11:07:04.175690   27717 ssh_runner.go:195] Run: which crictl
	I0422 11:07:04.179988   27717 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 11:07:04.226582   27717 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 11:07:04.226660   27717 ssh_runner.go:195] Run: crio --version
	I0422 11:07:04.257365   27717 ssh_runner.go:195] Run: crio --version
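
After cri-o is restarted, the start-up logic waits up to 60s for /var/run/crio/crio.sock to exist before asking crictl for the runtime version. A minimal sketch of that wait-for-socket step (names are illustrative, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls stat() on path until it exists or timeout elapses.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		fmt.Println(waitForPath("/var/run/crio/crio.sock", 60*time.Second))
	}
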
	I0422 11:07:04.295410   27717 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 11:07:04.296625   27717 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:07:04.299600   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:04.299932   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:04.299962   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:04.300216   27717 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 11:07:04.304879   27717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 11:07:04.320479   27717 kubeadm.go:877] updating cluster {Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:
default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 11:07:04.320578   27717 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 11:07:04.320620   27717 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 11:07:04.356973   27717 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 11:07:04.357047   27717 ssh_runner.go:195] Run: which lz4
	I0422 11:07:04.361530   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0422 11:07:04.361631   27717 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0422 11:07:04.366278   27717 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 11:07:04.366307   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 11:07:06.042466   27717 crio.go:462] duration metric: took 1.680867865s to copy over tarball
	I0422 11:07:06.042549   27717 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 11:07:08.517152   27717 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.474581288s)
	I0422 11:07:08.517178   27717 crio.go:469] duration metric: took 2.474670403s to extract the tarball
	I0422 11:07:08.517185   27717 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 11:07:08.557848   27717 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 11:07:08.614549   27717 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 11:07:08.614573   27717 cache_images.go:84] Images are preloaded, skipping loading
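
The preload check runs "sudo crictl images --output json" on the guest and looks for the expected kube-apiserver tag; on the first pass it is missing, the ~395MB preload tarball is copied over and unpacked into /var, and the second pass just above finds all images present. A hedged Go sketch of that check (JSON field names follow crictl's documented output; struct and function names are mine):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage reports whether any image tag known to the runtime contains want.
	func hasImage(want string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var imgs crictlImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			return false, err
		}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				if strings.Contains(tag, want) {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.0")
		fmt.Println("preloaded:", ok, "err:", err)
	}
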
	I0422 11:07:08.614580   27717 kubeadm.go:928] updating node { 192.168.39.150 8443 v1.30.0 crio true true} ...
	I0422 11:07:08.614696   27717 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-821265 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 11:07:08.614771   27717 ssh_runner.go:195] Run: crio config
	I0422 11:07:08.672421   27717 cni.go:84] Creating CNI manager for ""
	I0422 11:07:08.672449   27717 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0422 11:07:08.672466   27717 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 11:07:08.672491   27717 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-821265 NodeName:ha-821265 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 11:07:08.672663   27717 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-821265"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 11:07:08.672692   27717 kube-vip.go:111] generating kube-vip config ...
	I0422 11:07:08.672740   27717 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0422 11:07:08.691071   27717 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0422 11:07:08.691194   27717 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0422 11:07:08.691255   27717 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 11:07:08.703581   27717 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 11:07:08.703648   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0422 11:07:08.715654   27717 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0422 11:07:08.735131   27717 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 11:07:08.754255   27717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0422 11:07:08.773720   27717 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0422 11:07:08.792889   27717 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0422 11:07:08.797695   27717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 11:07:08.813352   27717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:07:08.956712   27717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 11:07:08.976494   27717 certs.go:68] Setting up /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265 for IP: 192.168.39.150
	I0422 11:07:08.976540   27717 certs.go:194] generating shared ca certs ...
	I0422 11:07:08.976559   27717 certs.go:226] acquiring lock for ca certs: {Name:mk0b77082b88c771d0b00be5267ca31dfee6f85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:07:08.976742   27717 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key
	I0422 11:07:08.976832   27717 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key
	I0422 11:07:08.976847   27717 certs.go:256] generating profile certs ...
	I0422 11:07:08.976914   27717 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key
	I0422 11:07:08.976930   27717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.crt with IP's: []
	I0422 11:07:09.418231   27717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.crt ...
	I0422 11:07:09.418257   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.crt: {Name:mk52952f8b4db593aadb2c250839f7b574f97019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:07:09.418416   27717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key ...
	I0422 11:07:09.418426   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key: {Name:mk8d80f7827aef3d1fd632a27cf705619b9e8dd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:07:09.418497   27717 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.9e670a0c
	I0422 11:07:09.418511   27717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.9e670a0c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150 192.168.39.254]
	I0422 11:07:09.559977   27717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.9e670a0c ...
	I0422 11:07:09.560006   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.9e670a0c: {Name:mk0789273f8824637744f6bccf5e25fe0c785651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:07:09.560146   27717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.9e670a0c ...
	I0422 11:07:09.560159   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.9e670a0c: {Name:mkd7c463326ca403ace533aedb196950306b2956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:07:09.560244   27717 certs.go:381] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.9e670a0c -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt
	I0422 11:07:09.560313   27717 certs.go:385] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.9e670a0c -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key
	I0422 11:07:09.560361   27717 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key
	I0422 11:07:09.560375   27717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt with IP's: []
	I0422 11:07:09.686192   27717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt ...
	I0422 11:07:09.686223   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt: {Name:mkdcbe0e829b44ac15262334df2d0ec129d534bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:07:09.686384   27717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key ...
	I0422 11:07:09.686394   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key: {Name:mk898c3151cb501a42e5a95c8238e1c668504887 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
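(Annotation: the crypto.go entries above generate CA-signed profile certs, and the apiserver one carries the service IP, loopback, node IP and HA VIP as SANs. A self-contained Go sketch of issuing such a serving cert with the standard crypto/x509 package; the function name and parameters are invented for illustration and are not minikube's API.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a serving certificate for the given IP SANs, signed
// by an existing CA. Illustrative sketch only.
func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// e.g. 10.96.0.1, 127.0.0.1, 192.168.39.150, 192.168.39.254 as in the log
		IPAddresses: ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, key.Public(), caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}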
	I0422 11:07:09.686466   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 11:07:09.686483   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 11:07:09.686492   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 11:07:09.686510   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 11:07:09.686523   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 11:07:09.686541   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 11:07:09.686558   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 11:07:09.686570   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 11:07:09.686618   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem (1338 bytes)
	W0422 11:07:09.686664   27717 certs.go:480] ignoring /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945_empty.pem, impossibly tiny 0 bytes
	I0422 11:07:09.686673   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem (1679 bytes)
	I0422 11:07:09.686693   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem (1078 bytes)
	I0422 11:07:09.686717   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem (1123 bytes)
	I0422 11:07:09.686740   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem (1679 bytes)
	I0422 11:07:09.686778   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:07:09.686801   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem -> /usr/share/ca-certificates/14945.pem
	I0422 11:07:09.686814   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /usr/share/ca-certificates/149452.pem
	I0422 11:07:09.686826   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:07:09.687394   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 11:07:09.719007   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 11:07:09.752361   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 11:07:09.786230   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 11:07:09.826254   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0422 11:07:09.854135   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 11:07:09.880891   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 11:07:09.908450   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 11:07:09.937772   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem --> /usr/share/ca-certificates/14945.pem (1338 bytes)
	I0422 11:07:09.964957   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /usr/share/ca-certificates/149452.pem (1708 bytes)
	I0422 11:07:09.992047   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 11:07:10.018680   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 11:07:10.039838   27717 ssh_runner.go:195] Run: openssl version
	I0422 11:07:10.046726   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149452.pem && ln -fs /usr/share/ca-certificates/149452.pem /etc/ssl/certs/149452.pem"
	I0422 11:07:10.059914   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149452.pem
	I0422 11:07:10.065454   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 10:51 /usr/share/ca-certificates/149452.pem
	I0422 11:07:10.065516   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149452.pem
	I0422 11:07:10.072114   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149452.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 11:07:10.085601   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 11:07:10.098686   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:07:10.103949   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:07:10.104023   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:07:10.110511   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 11:07:10.123334   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14945.pem && ln -fs /usr/share/ca-certificates/14945.pem /etc/ssl/certs/14945.pem"
	I0422 11:07:10.136466   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14945.pem
	I0422 11:07:10.141665   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 10:51 /usr/share/ca-certificates/14945.pem
	I0422 11:07:10.141714   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14945.pem
	I0422 11:07:10.148031   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14945.pem /etc/ssl/certs/51391683.0"
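(Annotation: each CA file is installed by hashing its subject with openssl and symlinking /etc/ssl/certs/<hash>.0 to it, which is how OpenSSL's CApath lookup finds trust anchors. A small Go sketch of the same two steps; it assumes openssl on PATH and write access to the certs directory, and is purely illustrative.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links certPath into certsDir under its OpenSSL subject hash
// (e.g. /etc/ssl/certs/b5213941.0), mirroring the openssl + ln -fs commands
// in the log. Hypothetical helper.
func installCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(certPath, link)
}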
	I0422 11:07:10.161327   27717 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 11:07:10.165970   27717 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 11:07:10.166014   27717 kubeadm.go:391] StartCluster: {Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:def
ault APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:07:10.166086   27717 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 11:07:10.166125   27717 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 11:07:10.211220   27717 cri.go:89] found id: ""
	I0422 11:07:10.211280   27717 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0422 11:07:10.223189   27717 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 11:07:10.234230   27717 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 11:07:10.245366   27717 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 11:07:10.245396   27717 kubeadm.go:156] found existing configuration files:
	
	I0422 11:07:10.245436   27717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 11:07:10.255832   27717 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 11:07:10.255887   27717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 11:07:10.267058   27717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 11:07:10.278221   27717 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 11:07:10.278286   27717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 11:07:10.289797   27717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 11:07:10.300487   27717 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 11:07:10.300547   27717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 11:07:10.311149   27717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 11:07:10.321852   27717 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 11:07:10.321927   27717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 11:07:10.333728   27717 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 11:07:10.440238   27717 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 11:07:10.440307   27717 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 11:07:10.608397   27717 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 11:07:10.608523   27717 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 11:07:10.608647   27717 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 11:07:10.850748   27717 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 11:07:10.960280   27717 out.go:204]   - Generating certificates and keys ...
	I0422 11:07:10.960408   27717 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 11:07:10.960497   27717 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 11:07:11.181371   27717 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0422 11:07:11.287702   27717 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0422 11:07:11.629487   27717 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0422 11:07:11.731677   27717 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0422 11:07:11.859817   27717 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0422 11:07:11.860017   27717 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-821265 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I0422 11:07:11.948558   27717 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0422 11:07:12.006501   27717 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-821265 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I0422 11:07:12.447883   27717 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0422 11:07:12.714302   27717 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0422 11:07:12.795236   27717 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0422 11:07:12.795355   27717 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 11:07:12.956592   27717 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 11:07:13.238680   27717 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 11:07:13.406825   27717 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 11:07:13.748333   27717 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 11:07:14.012055   27717 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 11:07:14.012755   27717 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 11:07:14.016020   27717 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 11:07:14.019738   27717 out.go:204]   - Booting up control plane ...
	I0422 11:07:14.019859   27717 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 11:07:14.019984   27717 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 11:07:14.020069   27717 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 11:07:14.039659   27717 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 11:07:14.042042   27717 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 11:07:14.042100   27717 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 11:07:14.176609   27717 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 11:07:14.176741   27717 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 11:07:15.178523   27717 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002710448s
	I0422 11:07:15.178595   27717 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 11:07:20.847642   27717 kubeadm.go:309] [api-check] The API server is healthy after 5.67237434s
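(Annotation: the kubelet-check and api-check phases are simple health polls; here the API server answered after roughly 5.7s. A rough Go sketch of polling a /healthz endpoint over the HA VIP; it skips TLS verification for brevity, and the endpoint and timing are illustrative rather than kubeadm's exact probe.)

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls https://<endpoint>/healthz until it returns 200 OK
// or the timeout expires. Illustrative only.
func waitForAPIServer(ctx context.Context, endpoint string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://" + endpoint + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Second):
		}
	}
	return fmt.Errorf("API server at %s not healthy after %s", endpoint, timeout)
}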
	I0422 11:07:20.860726   27717 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 11:07:20.884143   27717 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 11:07:20.912922   27717 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 11:07:20.913118   27717 kubeadm.go:309] [mark-control-plane] Marking the node ha-821265 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 11:07:20.928649   27717 kubeadm.go:309] [bootstrap-token] Using token: yuo67z.grhhzrpl1n2nxox8
	I0422 11:07:20.930298   27717 out.go:204]   - Configuring RBAC rules ...
	I0422 11:07:20.930431   27717 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 11:07:20.937411   27717 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 11:07:20.948557   27717 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 11:07:20.952520   27717 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 11:07:20.956537   27717 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 11:07:20.959717   27717 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 11:07:21.255254   27717 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 11:07:21.708154   27717 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 11:07:22.262044   27717 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 11:07:22.262081   27717 kubeadm.go:309] 
	I0422 11:07:22.262177   27717 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 11:07:22.262190   27717 kubeadm.go:309] 
	I0422 11:07:22.262284   27717 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 11:07:22.262297   27717 kubeadm.go:309] 
	I0422 11:07:22.262352   27717 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 11:07:22.262427   27717 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 11:07:22.262507   27717 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 11:07:22.262520   27717 kubeadm.go:309] 
	I0422 11:07:22.262601   27717 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 11:07:22.262616   27717 kubeadm.go:309] 
	I0422 11:07:22.262689   27717 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 11:07:22.262707   27717 kubeadm.go:309] 
	I0422 11:07:22.262785   27717 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 11:07:22.262890   27717 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 11:07:22.262998   27717 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 11:07:22.263012   27717 kubeadm.go:309] 
	I0422 11:07:22.263130   27717 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 11:07:22.263234   27717 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 11:07:22.263247   27717 kubeadm.go:309] 
	I0422 11:07:22.263369   27717 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yuo67z.grhhzrpl1n2nxox8 \
	I0422 11:07:22.263515   27717 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f \
	I0422 11:07:22.263553   27717 kubeadm.go:309] 	--control-plane 
	I0422 11:07:22.263562   27717 kubeadm.go:309] 
	I0422 11:07:22.263661   27717 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 11:07:22.263672   27717 kubeadm.go:309] 
	I0422 11:07:22.263808   27717 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yuo67z.grhhzrpl1n2nxox8 \
	I0422 11:07:22.263949   27717 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f 
	I0422 11:07:22.264112   27717 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 11:07:22.264148   27717 cni.go:84] Creating CNI manager for ""
	I0422 11:07:22.264162   27717 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0422 11:07:22.266062   27717 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0422 11:07:22.267446   27717 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0422 11:07:22.273513   27717 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0422 11:07:22.273527   27717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0422 11:07:22.292914   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0422 11:07:22.660410   27717 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 11:07:22.660535   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:22.660539   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-821265 minikube.k8s.io/updated_at=2024_04_22T11_07_22_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437 minikube.k8s.io/name=ha-821265 minikube.k8s.io/primary=true
	I0422 11:07:22.693421   27717 ops.go:34] apiserver oom_adj: -16
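(Annotation: right after init, minikube grants kube-system's default service account cluster-admin and stamps the primary node with minikube.k8s.io labels via kubectl, as shown above. The labeling step could equally be done with client-go; the profile name and label values below come from the log, while the helper itself is hypothetical.)

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// labelPrimaryNode adds minikube-style labels to a node, equivalent to the
// `kubectl label --overwrite nodes ...` call in the log. Hypothetical helper.
func labelPrimaryNode(ctx context.Context, cs kubernetes.Interface, nodeName string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	node.Labels["minikube.k8s.io/name"] = "ha-821265"
	node.Labels["minikube.k8s.io/primary"] = "true"
	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}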
	I0422 11:07:22.849804   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:23.350189   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:23.850615   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:24.350927   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:24.849910   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:25.350015   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:25.850250   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:26.350024   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:26.850681   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:27.349901   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:27.850740   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:28.350694   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:28.850222   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:29.349986   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:29.850702   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:30.350742   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:30.850442   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:31.349847   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:31.850029   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:32.349941   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:32.850871   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:33.350491   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:33.849979   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:34.349942   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:34.850271   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:35.350692   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:35.850705   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:36.350530   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:36.850731   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:37.350006   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 11:07:37.492355   27717 kubeadm.go:1107] duration metric: took 14.831883199s to wait for elevateKubeSystemPrivileges
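(Annotation: the repeated `kubectl get sa default` calls above are a poll; the step cannot finish until the default ServiceAccount exists in the new cluster, which took about 14.8s here. With client-go the same wait can be expressed with PollUntilContextTimeout; this is a sketch of what the loop accomplishes, not minikube's code.)

package main

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDefaultSA blocks until the "default" ServiceAccount shows up in the
// default namespace, polling every 500ms.
func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // keep polling until the SA exists
			}
			return err == nil, err
		})
}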
	W0422 11:07:37.492403   27717 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 11:07:37.492412   27717 kubeadm.go:393] duration metric: took 27.326400295s to StartCluster
	I0422 11:07:37.492431   27717 settings.go:142] acquiring lock: {Name:mkd680667f0df4166491741d55b55ac111bb0138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:07:37.492511   27717 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 11:07:37.493319   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/kubeconfig: {Name:mkee6de4c6906fe5621e8aeac858a93219648db5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:07:37.493562   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0422 11:07:37.493580   27717 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 11:07:37.493634   27717 addons.go:69] Setting storage-provisioner=true in profile "ha-821265"
	I0422 11:07:37.493659   27717 addons.go:69] Setting default-storageclass=true in profile "ha-821265"
	I0422 11:07:37.493705   27717 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-821265"
	I0422 11:07:37.493739   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:07:37.493562   27717 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 11:07:37.493785   27717 start.go:240] waiting for startup goroutines ...
	I0422 11:07:37.493664   27717 addons.go:234] Setting addon storage-provisioner=true in "ha-821265"
	I0422 11:07:37.493831   27717 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:07:37.494037   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:07:37.494059   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:07:37.494223   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:07:37.494257   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:07:37.508555   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
	I0422 11:07:37.508611   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37983
	I0422 11:07:37.509008   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:07:37.509046   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:07:37.509515   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:07:37.509535   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:07:37.509545   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:07:37.509551   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:07:37.509906   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:07:37.509946   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:07:37.510119   27717 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:07:37.510502   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:07:37.510536   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:07:37.512267   27717 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 11:07:37.512584   27717 kapi.go:59] client config for ha-821265: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.crt", KeyFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key", CAFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
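(Annotation: the rest.Config above points at the HA VIP (https://192.168.39.254:8443) using the profile's client cert/key and the cluster CA. An equivalent clientset can be built from the same kubeconfig with client-go; the path is the one from the log, and the program is a sketch that performs the same storage-class lookup seen further down.)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig minikube just wrote; its current context targets
	// the VIP rather than any single control-plane node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18711-7633/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same resource the log fetches below via round_trippers.
	scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("storage classes:", len(scs.Items))
}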
	I0422 11:07:37.513095   27717 cert_rotation.go:137] Starting client certificate rotation controller
	I0422 11:07:37.513356   27717 addons.go:234] Setting addon default-storageclass=true in "ha-821265"
	I0422 11:07:37.513400   27717 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:07:37.513797   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:07:37.513844   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:07:37.526083   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43051
	I0422 11:07:37.526636   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:07:37.527148   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:07:37.527166   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:07:37.527494   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:07:37.527677   27717 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:07:37.527950   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0422 11:07:37.528423   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:07:37.528961   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:07:37.528992   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:07:37.529325   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:07:37.529334   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:37.531582   27717 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 11:07:37.529873   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:07:37.533014   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:07:37.533096   27717 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 11:07:37.533112   27717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 11:07:37.533130   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:37.536149   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:37.536532   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:37.536564   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:37.536756   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:37.536999   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:37.537161   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:37.537326   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:07:37.547901   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34973
	I0422 11:07:37.548257   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:07:37.548732   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:07:37.548757   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:07:37.549107   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:07:37.549292   27717 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:07:37.550876   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:07:37.551112   27717 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 11:07:37.551126   27717 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 11:07:37.551142   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:07:37.553701   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:37.554028   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:07:37.554054   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:07:37.554172   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:07:37.554367   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:07:37.554512   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:07:37.554659   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:07:37.666344   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0422 11:07:37.683340   27717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 11:07:37.776823   27717 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 11:07:38.264702   27717 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
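(Annotation: the long sed pipeline at 11:07:37.666344 splices a hosts{} stanza into CoreDNS's Corefile so pods can resolve host.minikube.internal to the host at 192.168.39.1, which the line above confirms. The same edit can be made through client-go instead of sed; the sketch below keeps the string surgery deliberately naive and the indentation handling would need to match the live Corefile.)

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// injectHostRecord inserts a hosts{} stanza ahead of the forward plugin in the
// coredns ConfigMap so host.minikube.internal resolves to hostIP. Sketch of
// what the sed | kubectl replace pipeline in the log does.
func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	block := "hosts {\n   " + hostIP + " host.minikube.internal\n   fallthrough\n}\n"
	// Naive: prepend the stanza before the first "forward ." directive.
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "forward .", block+"forward .", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}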
	I0422 11:07:38.584466   27717 main.go:141] libmachine: Making call to close driver server
	I0422 11:07:38.584490   27717 main.go:141] libmachine: (ha-821265) Calling .Close
	I0422 11:07:38.584489   27717 main.go:141] libmachine: Making call to close driver server
	I0422 11:07:38.584499   27717 main.go:141] libmachine: (ha-821265) Calling .Close
	I0422 11:07:38.584843   27717 main.go:141] libmachine: Successfully made call to close driver server
	I0422 11:07:38.584862   27717 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 11:07:38.584871   27717 main.go:141] libmachine: Making call to close driver server
	I0422 11:07:38.584878   27717 main.go:141] libmachine: (ha-821265) Calling .Close
	I0422 11:07:38.584892   27717 main.go:141] libmachine: (ha-821265) DBG | Closing plugin on server side
	I0422 11:07:38.584921   27717 main.go:141] libmachine: Successfully made call to close driver server
	I0422 11:07:38.584939   27717 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 11:07:38.584949   27717 main.go:141] libmachine: Making call to close driver server
	I0422 11:07:38.584960   27717 main.go:141] libmachine: (ha-821265) Calling .Close
	I0422 11:07:38.585165   27717 main.go:141] libmachine: Successfully made call to close driver server
	I0422 11:07:38.585186   27717 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 11:07:38.585245   27717 main.go:141] libmachine: (ha-821265) DBG | Closing plugin on server side
	I0422 11:07:38.585275   27717 main.go:141] libmachine: Successfully made call to close driver server
	I0422 11:07:38.585288   27717 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 11:07:38.585409   27717 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0422 11:07:38.585422   27717 round_trippers.go:469] Request Headers:
	I0422 11:07:38.585439   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:07:38.585446   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:07:38.599077   27717 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0422 11:07:38.599625   27717 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0422 11:07:38.599640   27717 round_trippers.go:469] Request Headers:
	I0422 11:07:38.599647   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:07:38.599651   27717 round_trippers.go:473]     Content-Type: application/json
	I0422 11:07:38.599653   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:07:38.602433   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:07:38.602601   27717 main.go:141] libmachine: Making call to close driver server
	I0422 11:07:38.602622   27717 main.go:141] libmachine: (ha-821265) Calling .Close
	I0422 11:07:38.602886   27717 main.go:141] libmachine: Successfully made call to close driver server
	I0422 11:07:38.602904   27717 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 11:07:38.602909   27717 main.go:141] libmachine: (ha-821265) DBG | Closing plugin on server side
	I0422 11:07:38.605613   27717 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0422 11:07:38.607072   27717 addons.go:505] duration metric: took 1.113487551s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0422 11:07:38.607108   27717 start.go:245] waiting for cluster config update ...
	I0422 11:07:38.607123   27717 start.go:254] writing updated cluster config ...
	I0422 11:07:38.608878   27717 out.go:177] 
	I0422 11:07:38.610515   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:07:38.610586   27717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:07:38.612341   27717 out.go:177] * Starting "ha-821265-m02" control-plane node in "ha-821265" cluster
	I0422 11:07:38.613595   27717 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 11:07:38.613614   27717 cache.go:56] Caching tarball of preloaded images
	I0422 11:07:38.613693   27717 preload.go:173] Found /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 11:07:38.613733   27717 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 11:07:38.613804   27717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:07:38.613988   27717 start.go:360] acquireMachinesLock for ha-821265-m02: {Name:mk5cb9b294e703b264c1f97ac968ffd01e93b576 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 11:07:38.614031   27717 start.go:364] duration metric: took 23.705µs to acquireMachinesLock for "ha-821265-m02"
	I0422 11:07:38.614047   27717 start.go:93] Provisioning new machine with config: &{Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:h
a-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 11:07:38.614111   27717 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0422 11:07:38.615767   27717 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 11:07:38.615865   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:07:38.615894   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:07:38.630236   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43337
	I0422 11:07:38.630684   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:07:38.631201   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:07:38.631224   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:07:38.631528   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:07:38.631771   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetMachineName
	I0422 11:07:38.631910   27717 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:07:38.632051   27717 start.go:159] libmachine.API.Create for "ha-821265" (driver="kvm2")
	I0422 11:07:38.632075   27717 client.go:168] LocalClient.Create starting
	I0422 11:07:38.632097   27717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem
	I0422 11:07:38.632124   27717 main.go:141] libmachine: Decoding PEM data...
	I0422 11:07:38.632140   27717 main.go:141] libmachine: Parsing certificate...
	I0422 11:07:38.632188   27717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem
	I0422 11:07:38.632205   27717 main.go:141] libmachine: Decoding PEM data...
	I0422 11:07:38.632215   27717 main.go:141] libmachine: Parsing certificate...
	I0422 11:07:38.632228   27717 main.go:141] libmachine: Running pre-create checks...
	I0422 11:07:38.632236   27717 main.go:141] libmachine: (ha-821265-m02) Calling .PreCreateCheck
	I0422 11:07:38.632440   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetConfigRaw
	I0422 11:07:38.632832   27717 main.go:141] libmachine: Creating machine...
	I0422 11:07:38.632847   27717 main.go:141] libmachine: (ha-821265-m02) Calling .Create
	I0422 11:07:38.632966   27717 main.go:141] libmachine: (ha-821265-m02) Creating KVM machine...
	I0422 11:07:38.634262   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found existing default KVM network
	I0422 11:07:38.634429   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found existing private KVM network mk-ha-821265
	I0422 11:07:38.634613   27717 main.go:141] libmachine: (ha-821265-m02) Setting up store path in /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02 ...
	I0422 11:07:38.634637   27717 main.go:141] libmachine: (ha-821265-m02) Building disk image from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0422 11:07:38.634697   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:38.634608   28122 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:07:38.634839   27717 main.go:141] libmachine: (ha-821265-m02) Downloading /home/jenkins/minikube-integration/18711-7633/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0422 11:07:38.858903   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:38.858752   28122 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa...
	I0422 11:07:39.068919   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:39.068788   28122 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/ha-821265-m02.rawdisk...
	I0422 11:07:39.068952   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Writing magic tar header
	I0422 11:07:39.068966   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Writing SSH key tar header
	I0422 11:07:39.068978   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:39.068894   28122 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02 ...
	I0422 11:07:39.068993   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02
	I0422 11:07:39.069051   27717 main.go:141] libmachine: (ha-821265-m02) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02 (perms=drwx------)
	I0422 11:07:39.069084   27717 main.go:141] libmachine: (ha-821265-m02) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines (perms=drwxr-xr-x)
	I0422 11:07:39.069100   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines
	I0422 11:07:39.069113   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:07:39.069121   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633
	I0422 11:07:39.069130   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 11:07:39.069137   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Checking permissions on dir: /home/jenkins
	I0422 11:07:39.069150   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Checking permissions on dir: /home
	I0422 11:07:39.069168   27717 main.go:141] libmachine: (ha-821265-m02) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube (perms=drwxr-xr-x)
	I0422 11:07:39.069179   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Skipping /home - not owner
	I0422 11:07:39.069196   27717 main.go:141] libmachine: (ha-821265-m02) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633 (perms=drwxrwxr-x)
	I0422 11:07:39.069208   27717 main.go:141] libmachine: (ha-821265-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 11:07:39.069219   27717 main.go:141] libmachine: (ha-821265-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 11:07:39.069228   27717 main.go:141] libmachine: (ha-821265-m02) Creating domain...
	I0422 11:07:39.070037   27717 main.go:141] libmachine: (ha-821265-m02) define libvirt domain using xml: 
	I0422 11:07:39.070052   27717 main.go:141] libmachine: (ha-821265-m02) <domain type='kvm'>
	I0422 11:07:39.070061   27717 main.go:141] libmachine: (ha-821265-m02)   <name>ha-821265-m02</name>
	I0422 11:07:39.070067   27717 main.go:141] libmachine: (ha-821265-m02)   <memory unit='MiB'>2200</memory>
	I0422 11:07:39.070075   27717 main.go:141] libmachine: (ha-821265-m02)   <vcpu>2</vcpu>
	I0422 11:07:39.070085   27717 main.go:141] libmachine: (ha-821265-m02)   <features>
	I0422 11:07:39.070099   27717 main.go:141] libmachine: (ha-821265-m02)     <acpi/>
	I0422 11:07:39.070109   27717 main.go:141] libmachine: (ha-821265-m02)     <apic/>
	I0422 11:07:39.070121   27717 main.go:141] libmachine: (ha-821265-m02)     <pae/>
	I0422 11:07:39.070137   27717 main.go:141] libmachine: (ha-821265-m02)     
	I0422 11:07:39.070149   27717 main.go:141] libmachine: (ha-821265-m02)   </features>
	I0422 11:07:39.070165   27717 main.go:141] libmachine: (ha-821265-m02)   <cpu mode='host-passthrough'>
	I0422 11:07:39.070176   27717 main.go:141] libmachine: (ha-821265-m02)   
	I0422 11:07:39.070184   27717 main.go:141] libmachine: (ha-821265-m02)   </cpu>
	I0422 11:07:39.070194   27717 main.go:141] libmachine: (ha-821265-m02)   <os>
	I0422 11:07:39.070205   27717 main.go:141] libmachine: (ha-821265-m02)     <type>hvm</type>
	I0422 11:07:39.070217   27717 main.go:141] libmachine: (ha-821265-m02)     <boot dev='cdrom'/>
	I0422 11:07:39.070229   27717 main.go:141] libmachine: (ha-821265-m02)     <boot dev='hd'/>
	I0422 11:07:39.070241   27717 main.go:141] libmachine: (ha-821265-m02)     <bootmenu enable='no'/>
	I0422 11:07:39.070253   27717 main.go:141] libmachine: (ha-821265-m02)   </os>
	I0422 11:07:39.070263   27717 main.go:141] libmachine: (ha-821265-m02)   <devices>
	I0422 11:07:39.070274   27717 main.go:141] libmachine: (ha-821265-m02)     <disk type='file' device='cdrom'>
	I0422 11:07:39.070289   27717 main.go:141] libmachine: (ha-821265-m02)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/boot2docker.iso'/>
	I0422 11:07:39.070301   27717 main.go:141] libmachine: (ha-821265-m02)       <target dev='hdc' bus='scsi'/>
	I0422 11:07:39.070314   27717 main.go:141] libmachine: (ha-821265-m02)       <readonly/>
	I0422 11:07:39.070324   27717 main.go:141] libmachine: (ha-821265-m02)     </disk>
	I0422 11:07:39.070348   27717 main.go:141] libmachine: (ha-821265-m02)     <disk type='file' device='disk'>
	I0422 11:07:39.070373   27717 main.go:141] libmachine: (ha-821265-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 11:07:39.070391   27717 main.go:141] libmachine: (ha-821265-m02)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/ha-821265-m02.rawdisk'/>
	I0422 11:07:39.070410   27717 main.go:141] libmachine: (ha-821265-m02)       <target dev='hda' bus='virtio'/>
	I0422 11:07:39.070422   27717 main.go:141] libmachine: (ha-821265-m02)     </disk>
	I0422 11:07:39.070437   27717 main.go:141] libmachine: (ha-821265-m02)     <interface type='network'>
	I0422 11:07:39.070447   27717 main.go:141] libmachine: (ha-821265-m02)       <source network='mk-ha-821265'/>
	I0422 11:07:39.070458   27717 main.go:141] libmachine: (ha-821265-m02)       <model type='virtio'/>
	I0422 11:07:39.070470   27717 main.go:141] libmachine: (ha-821265-m02)     </interface>
	I0422 11:07:39.070481   27717 main.go:141] libmachine: (ha-821265-m02)     <interface type='network'>
	I0422 11:07:39.070492   27717 main.go:141] libmachine: (ha-821265-m02)       <source network='default'/>
	I0422 11:07:39.070502   27717 main.go:141] libmachine: (ha-821265-m02)       <model type='virtio'/>
	I0422 11:07:39.070513   27717 main.go:141] libmachine: (ha-821265-m02)     </interface>
	I0422 11:07:39.070527   27717 main.go:141] libmachine: (ha-821265-m02)     <serial type='pty'>
	I0422 11:07:39.070535   27717 main.go:141] libmachine: (ha-821265-m02)       <target port='0'/>
	I0422 11:07:39.070545   27717 main.go:141] libmachine: (ha-821265-m02)     </serial>
	I0422 11:07:39.070557   27717 main.go:141] libmachine: (ha-821265-m02)     <console type='pty'>
	I0422 11:07:39.070569   27717 main.go:141] libmachine: (ha-821265-m02)       <target type='serial' port='0'/>
	I0422 11:07:39.070581   27717 main.go:141] libmachine: (ha-821265-m02)     </console>
	I0422 11:07:39.070594   27717 main.go:141] libmachine: (ha-821265-m02)     <rng model='virtio'>
	I0422 11:07:39.070608   27717 main.go:141] libmachine: (ha-821265-m02)       <backend model='random'>/dev/random</backend>
	I0422 11:07:39.070616   27717 main.go:141] libmachine: (ha-821265-m02)     </rng>
	I0422 11:07:39.070624   27717 main.go:141] libmachine: (ha-821265-m02)     
	I0422 11:07:39.070635   27717 main.go:141] libmachine: (ha-821265-m02)     
	I0422 11:07:39.070648   27717 main.go:141] libmachine: (ha-821265-m02)   </devices>
	I0422 11:07:39.070658   27717 main.go:141] libmachine: (ha-821265-m02) </domain>
	I0422 11:07:39.070697   27717 main.go:141] libmachine: (ha-821265-m02) 
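The XML echoed line by line above is the libvirt domain definition the kvm2 driver hands to libvirt for the m02 VM. As a rough, self-contained sketch (not minikube's actual template or full field set), rendering such a definition from a few parameters could look like the following; the struct and template names are invented for illustration:

```go
// Hypothetical sketch: render a trimmed libvirt domain definition from parameters,
// loosely mirroring the XML printed in the log above.
package main

import (
	"os"
	"text/template"
)

type domainParams struct {
	Name     string // e.g. "ha-821265-m02"
	MemoryMB int    // e.g. 2200
	VCPU     int    // e.g. 2
	Network  string // private network, e.g. "mk-ha-821265"
	DiskPath string // path to the raw disk image
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPU}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	// Render to stdout; a driver would instead pass the XML to libvirt to define the domain.
	_ = t.Execute(os.Stdout, domainParams{
		Name:     "ha-821265-m02",
		MemoryMB: 2200,
		VCPU:     2,
		Network:  "mk-ha-821265",
		DiskPath: "/path/to/ha-821265-m02.rawdisk",
	})
}
```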
	I0422 11:07:39.076687   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:7d:91:8e in network default
	I0422 11:07:39.077253   27717 main.go:141] libmachine: (ha-821265-m02) Ensuring networks are active...
	I0422 11:07:39.077271   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:39.078064   27717 main.go:141] libmachine: (ha-821265-m02) Ensuring network default is active
	I0422 11:07:39.078404   27717 main.go:141] libmachine: (ha-821265-m02) Ensuring network mk-ha-821265 is active
	I0422 11:07:39.078879   27717 main.go:141] libmachine: (ha-821265-m02) Getting domain xml...
	I0422 11:07:39.079496   27717 main.go:141] libmachine: (ha-821265-m02) Creating domain...
	I0422 11:07:40.281067   27717 main.go:141] libmachine: (ha-821265-m02) Waiting to get IP...
	I0422 11:07:40.281872   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:40.282331   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:40.282378   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:40.282308   28122 retry.go:31] will retry after 209.923235ms: waiting for machine to come up
	I0422 11:07:40.493858   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:40.494350   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:40.494385   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:40.494301   28122 retry.go:31] will retry after 252.288683ms: waiting for machine to come up
	I0422 11:07:40.747583   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:40.748156   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:40.748182   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:40.748095   28122 retry.go:31] will retry after 406.145373ms: waiting for machine to come up
	I0422 11:07:41.155279   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:41.155756   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:41.155778   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:41.155721   28122 retry.go:31] will retry after 394.52636ms: waiting for machine to come up
	I0422 11:07:41.552175   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:41.552562   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:41.552592   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:41.552542   28122 retry.go:31] will retry after 573.105029ms: waiting for machine to come up
	I0422 11:07:42.126984   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:42.127466   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:42.127497   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:42.127417   28122 retry.go:31] will retry after 582.958863ms: waiting for machine to come up
	I0422 11:07:42.712332   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:42.712816   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:42.712846   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:42.712764   28122 retry.go:31] will retry after 730.242889ms: waiting for machine to come up
	I0422 11:07:43.444527   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:43.445079   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:43.445111   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:43.445027   28122 retry.go:31] will retry after 1.362127335s: waiting for machine to come up
	I0422 11:07:44.809161   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:44.809551   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:44.809581   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:44.809497   28122 retry.go:31] will retry after 1.496080323s: waiting for machine to come up
	I0422 11:07:46.308152   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:46.308736   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:46.308792   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:46.308665   28122 retry.go:31] will retry after 1.432513378s: waiting for machine to come up
	I0422 11:07:47.743407   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:47.743849   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:47.743880   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:47.743807   28122 retry.go:31] will retry after 2.384548765s: waiting for machine to come up
	I0422 11:07:50.130638   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:50.131138   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:50.131173   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:50.131098   28122 retry.go:31] will retry after 2.477699962s: waiting for machine to come up
	I0422 11:07:52.611732   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:52.612157   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:52.612172   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:52.612123   28122 retry.go:31] will retry after 3.533482498s: waiting for machine to come up
	I0422 11:07:56.147614   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:56.148219   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find current IP address of domain ha-821265-m02 in network mk-ha-821265
	I0422 11:07:56.148245   27717 main.go:141] libmachine: (ha-821265-m02) DBG | I0422 11:07:56.148156   28122 retry.go:31] will retry after 3.799865165s: waiting for machine to come up
	I0422 11:07:59.949768   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:59.950217   27717 main.go:141] libmachine: (ha-821265-m02) Found IP for machine: 192.168.39.39
	I0422 11:07:59.950249   27717 main.go:141] libmachine: (ha-821265-m02) Reserving static IP address...
	I0422 11:07:59.950261   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has current primary IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:07:59.950604   27717 main.go:141] libmachine: (ha-821265-m02) DBG | unable to find host DHCP lease matching {name: "ha-821265-m02", mac: "52:54:00:3b:2d:41", ip: "192.168.39.39"} in network mk-ha-821265
	I0422 11:08:00.024915   27717 main.go:141] libmachine: (ha-821265-m02) Reserved static IP address: 192.168.39.39
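The repeated "will retry after ..." lines above come from polling the private network's DHCP leases until the new domain reports an IP. A minimal sketch of that kind of jittered, growing backoff loop is below; the function name, delays, and cap are assumptions, not minikube's retry.go implementation:

```go
// Illustrative only: a backoff-and-retry loop of the sort suggested by the
// "retry.go:31] will retry after ..." lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn with a growing, jittered delay until it succeeds
// or the overall deadline passes.
func retryUntil(deadline time.Duration, fn func() error) error {
	start := time.Now()
	delay := 200 * time.Millisecond
	for {
		if err := fn(); err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return errors.New("timed out waiting for machine to come up")
		}
		// Add jitter and grow the delay, similar in spirit to the varying
		// "will retry after ..." intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
}

func main() {
	attempts := 0
	_ = retryUntil(30*time.Second, func() error {
		attempts++
		if attempts < 5 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
	fmt.Println("found IP after", attempts, "attempts")
}
```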
	I0422 11:08:00.024946   27717 main.go:141] libmachine: (ha-821265-m02) Waiting for SSH to be available...
	I0422 11:08:00.024957   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Getting to WaitForSSH function...
	I0422 11:08:00.027330   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.027693   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.027719   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.027917   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Using SSH client type: external
	I0422 11:08:00.027947   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa (-rw-------)
	I0422 11:08:00.027995   27717 main.go:141] libmachine: (ha-821265-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 11:08:00.028009   27717 main.go:141] libmachine: (ha-821265-m02) DBG | About to run SSH command:
	I0422 11:08:00.028032   27717 main.go:141] libmachine: (ha-821265-m02) DBG | exit 0
	I0422 11:08:00.148973   27717 main.go:141] libmachine: (ha-821265-m02) DBG | SSH cmd err, output: <nil>: 
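While waiting for SSH, libmachine's "external" client simply shells out to the system ssh binary with the options logged above and runs a trivial probe command. A hedged sketch of the equivalent call from Go follows; the option list, key path, and probe command are copied from the log, but assembling the command this way is illustrative only:

```go
// Sketch: invoke the system ssh binary the way the "Using SSH client type: external"
// lines above describe, and report the combined output of a trivial probe command.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa",
		"-p", "22",
		"docker@192.168.39.39",
		"exit 0", // the probe run while waiting for SSH to come up
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}
```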
	I0422 11:08:00.149269   27717 main.go:141] libmachine: (ha-821265-m02) KVM machine creation complete!
	I0422 11:08:00.149663   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetConfigRaw
	I0422 11:08:00.150197   27717 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:08:00.150434   27717 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:08:00.150596   27717 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 11:08:00.150616   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetState
	I0422 11:08:00.151900   27717 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 11:08:00.151912   27717 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 11:08:00.151920   27717 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 11:08:00.151927   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:00.154396   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.154898   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.154928   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.155188   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:00.155369   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.155530   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.155650   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:00.155845   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:08:00.156048   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0422 11:08:00.156061   27717 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 11:08:00.256261   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 11:08:00.256288   27717 main.go:141] libmachine: Detecting the provisioner...
	I0422 11:08:00.256298   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:00.259064   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.259471   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.259499   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.259666   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:00.259884   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.260049   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.260211   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:00.260385   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:08:00.260534   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0422 11:08:00.260545   27717 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 11:08:00.362338   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 11:08:00.362410   27717 main.go:141] libmachine: found compatible host: buildroot
	I0422 11:08:00.362420   27717 main.go:141] libmachine: Provisioning with buildroot...
	I0422 11:08:00.362429   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetMachineName
	I0422 11:08:00.362633   27717 buildroot.go:166] provisioning hostname "ha-821265-m02"
	I0422 11:08:00.362652   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetMachineName
	I0422 11:08:00.362824   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:00.365061   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.365427   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.365459   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.365605   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:00.365773   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.365932   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.366062   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:00.366217   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:08:00.366418   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0422 11:08:00.366435   27717 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-821265-m02 && echo "ha-821265-m02" | sudo tee /etc/hostname
	I0422 11:08:00.483472   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-821265-m02
	
	I0422 11:08:00.483501   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:00.486241   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.486647   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.486672   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.486906   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:00.487097   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.487295   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.487455   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:00.487634   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:08:00.487793   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0422 11:08:00.487809   27717 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-821265-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-821265-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-821265-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 11:08:00.599788   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
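The hostname and /etc/hosts provisioning above runs over minikube's "native" SSH client rather than the external binary. A rough sketch of issuing the same style of remote command from Go with golang.org/x/crypto/ssh is below; the host, user, key path, and command are taken from the log, but this is not the libmachine implementation:

```go
// Sketch: run a remote provisioning command over SSH from Go, in the spirit of the
// "Using SSH client type: native" lines above.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
	}
	client, err := ssh.Dial("tcp", "192.168.39.39:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Same style of command as the hostname provisioning step above.
	out, err := session.CombinedOutput(`sudo hostname ha-821265-m02 && echo "ha-821265-m02" | sudo tee /etc/hostname`)
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}
```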
	I0422 11:08:00.599822   27717 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18711-7633/.minikube CaCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18711-7633/.minikube}
	I0422 11:08:00.599847   27717 buildroot.go:174] setting up certificates
	I0422 11:08:00.599856   27717 provision.go:84] configureAuth start
	I0422 11:08:00.599866   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetMachineName
	I0422 11:08:00.600165   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetIP
	I0422 11:08:00.602844   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.603226   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.603252   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.603396   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:00.605548   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.605811   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.605835   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.605963   27717 provision.go:143] copyHostCerts
	I0422 11:08:00.605994   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:08:00.606026   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem, removing ...
	I0422 11:08:00.606035   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:08:00.606094   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem (1078 bytes)
	I0422 11:08:00.606159   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:08:00.606175   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem, removing ...
	I0422 11:08:00.606182   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:08:00.606204   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem (1123 bytes)
	I0422 11:08:00.606245   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:08:00.606279   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem, removing ...
	I0422 11:08:00.606283   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:08:00.606303   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem (1679 bytes)
	I0422 11:08:00.606348   27717 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem org=jenkins.ha-821265-m02 san=[127.0.0.1 192.168.39.39 ha-821265-m02 localhost minikube]
	I0422 11:08:00.820089   27717 provision.go:177] copyRemoteCerts
	I0422 11:08:00.820141   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 11:08:00.820163   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:00.823004   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.823324   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.823355   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.823557   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:00.823782   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.823963   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:00.824108   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa Username:docker}
	I0422 11:08:00.905817   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 11:08:00.905890   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 11:08:00.934564   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 11:08:00.934660   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0422 11:08:00.963574   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 11:08:00.963651   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 11:08:00.992392   27717 provision.go:87] duration metric: took 392.523314ms to configureAuth
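configureAuth above copies the host CA material and issues a server certificate whose SANs cover loopback, the node's private IP, and a few host names. A self-contained, hypothetical sketch of issuing such a certificate with Go's crypto/x509 follows; in the logged flow the CA key and certificate come from the ~/.minikube/certs paths rather than being generated on the spot, and error handling is elided here for brevity:

```go
// Sketch: issue a server certificate with the SAN list seen in the log, signed by a CA.
// A throwaway CA is generated so the example runs standalone.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (the real flow loads ca.pem / ca-key.pem from disk).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log:
	// san=[127.0.0.1 192.168.39.39 ha-821265-m02 localhost minikube]
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-821265-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.39")},
		DNSNames:     []string{"ha-821265-m02", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```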
	I0422 11:08:00.992423   27717 buildroot.go:189] setting minikube options for container-runtime
	I0422 11:08:00.992633   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:08:00.992738   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:00.995432   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.995786   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:00.995818   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:00.995901   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:00.996092   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.996245   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:00.996424   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:00.996569   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:08:00.996757   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0422 11:08:00.996783   27717 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 11:08:01.292968   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 11:08:01.292997   27717 main.go:141] libmachine: Checking connection to Docker...
	I0422 11:08:01.293008   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetURL
	I0422 11:08:01.294387   27717 main.go:141] libmachine: (ha-821265-m02) DBG | Using libvirt version 6000000
	I0422 11:08:01.296316   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.296702   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:01.296733   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.296920   27717 main.go:141] libmachine: Docker is up and running!
	I0422 11:08:01.296937   27717 main.go:141] libmachine: Reticulating splines...
	I0422 11:08:01.296943   27717 client.go:171] duration metric: took 22.664863117s to LocalClient.Create
	I0422 11:08:01.296965   27717 start.go:167] duration metric: took 22.664913115s to libmachine.API.Create "ha-821265"
	I0422 11:08:01.296973   27717 start.go:293] postStartSetup for "ha-821265-m02" (driver="kvm2")
	I0422 11:08:01.296985   27717 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 11:08:01.297007   27717 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:08:01.297253   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 11:08:01.297286   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:01.299470   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.299782   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:01.299808   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.299960   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:01.300123   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:01.300252   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:01.300390   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa Username:docker}
	I0422 11:08:01.382130   27717 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 11:08:01.387634   27717 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 11:08:01.387664   27717 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/addons for local assets ...
	I0422 11:08:01.387739   27717 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/files for local assets ...
	I0422 11:08:01.387826   27717 filesync.go:149] local asset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> 149452.pem in /etc/ssl/certs
	I0422 11:08:01.387843   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /etc/ssl/certs/149452.pem
	I0422 11:08:01.387947   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 11:08:01.399676   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:08:01.428041   27717 start.go:296] duration metric: took 131.053549ms for postStartSetup
	I0422 11:08:01.428101   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetConfigRaw
	I0422 11:08:01.428748   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetIP
	I0422 11:08:01.431381   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.431796   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:01.431827   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.432048   27717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:08:01.432302   27717 start.go:128] duration metric: took 22.818178479s to createHost
	I0422 11:08:01.432328   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:01.434738   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.435058   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:01.435081   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.435262   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:01.435468   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:01.435627   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:01.435761   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:01.435920   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:08:01.436075   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I0422 11:08:01.436086   27717 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 11:08:01.534505   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713784081.500955140
	
	I0422 11:08:01.534526   27717 fix.go:216] guest clock: 1713784081.500955140
	I0422 11:08:01.534533   27717 fix.go:229] Guest: 2024-04-22 11:08:01.50095514 +0000 UTC Remote: 2024-04-22 11:08:01.432317327 +0000 UTC m=+84.558643972 (delta=68.637813ms)
	I0422 11:08:01.534547   27717 fix.go:200] guest clock delta is within tolerance: 68.637813ms
	I0422 11:08:01.534552   27717 start.go:83] releasing machines lock for "ha-821265-m02", held for 22.920513101s
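The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine because the skew is only about 68ms. The arithmetic, reproduced with the values from the log, is sketched below; the one-second tolerance is an assumption made for the example, not necessarily the threshold minikube uses:

```go
// Sketch of the guest-clock skew check logged above, using the exact timestamps
// from the log lines.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1713784081, 500955140) // guest's `date +%s.%N` output
	host := time.Date(2024, 4, 22, 11, 8, 1, 432317327, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v\n", delta) // ~68.637813ms, matching the log

	const tolerance = time.Second // assumed threshold for this sketch
	if delta <= tolerance {
		fmt.Println("guest clock delta is within tolerance")
	} else {
		fmt.Println("would resync the guest clock")
	}
}
```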
	I0422 11:08:01.534568   27717 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:08:01.534852   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetIP
	I0422 11:08:01.537488   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.537820   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:01.537854   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.540532   27717 out.go:177] * Found network options:
	I0422 11:08:01.542100   27717 out.go:177]   - NO_PROXY=192.168.39.150
	W0422 11:08:01.543470   27717 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 11:08:01.543499   27717 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:08:01.544123   27717 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:08:01.544335   27717 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:08:01.544433   27717 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 11:08:01.544476   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	W0422 11:08:01.544571   27717 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 11:08:01.544644   27717 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 11:08:01.544668   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:08:01.547105   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.547287   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.547479   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:01.547521   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.547620   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:01.547752   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:01.547778   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:01.547806   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:01.547913   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:08:01.548035   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:01.548103   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:08:01.548174   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa Username:docker}
	I0422 11:08:01.548247   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:08:01.548374   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa Username:docker}
	I0422 11:08:01.801265   27717 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 11:08:01.808839   27717 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 11:08:01.808903   27717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 11:08:01.830039   27717 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 11:08:01.830062   27717 start.go:494] detecting cgroup driver to use...
	I0422 11:08:01.830131   27717 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 11:08:01.847745   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 11:08:01.864112   27717 docker.go:217] disabling cri-docker service (if available) ...
	I0422 11:08:01.864177   27717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 11:08:01.881388   27717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 11:08:01.896992   27717 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 11:08:02.017988   27717 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 11:08:02.153186   27717 docker.go:233] disabling docker service ...
	I0422 11:08:02.153262   27717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 11:08:02.170314   27717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 11:08:02.185420   27717 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 11:08:02.334674   27717 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 11:08:02.463413   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 11:08:02.481347   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 11:08:02.505117   27717 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 11:08:02.505179   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:08:02.519887   27717 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 11:08:02.519944   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:08:02.537079   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:08:02.550183   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:08:02.562990   27717 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 11:08:02.576044   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:08:02.589791   27717 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:08:02.610609   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:08:02.623991   27717 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 11:08:02.635903   27717 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 11:08:02.635973   27717 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 11:08:02.656318   27717 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 11:08:02.669014   27717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:08:02.797820   27717 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 11:08:02.956094   27717 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 11:08:02.956168   27717 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 11:08:02.961822   27717 start.go:562] Will wait 60s for crictl version
	I0422 11:08:02.961880   27717 ssh_runner.go:195] Run: which crictl
	I0422 11:08:02.966471   27717 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 11:08:03.010403   27717 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
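The "Will wait 60s for socket path /var/run/crio/crio.sock" and crictl version steps above are simple readiness polls after restarting CRI-O. A minimal sketch of such a wait, with the path and timeout taken from the log, is shown below; the helper itself is illustrative rather than minikube's implementation:

```go
// Sketch: poll until the CRI socket path exists or a deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}
```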
	I0422 11:08:03.010494   27717 ssh_runner.go:195] Run: crio --version
	I0422 11:08:03.041054   27717 ssh_runner.go:195] Run: crio --version
	I0422 11:08:03.074458   27717 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 11:08:03.076219   27717 out.go:177]   - env NO_PROXY=192.168.39.150
	I0422 11:08:03.077542   27717 main.go:141] libmachine: (ha-821265-m02) Calling .GetIP
	I0422 11:08:03.079900   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:03.080227   27717 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:07:54 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:08:03.080266   27717 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:08:03.080466   27717 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 11:08:03.085519   27717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 11:08:03.100828   27717 mustload.go:65] Loading cluster: ha-821265
	I0422 11:08:03.101095   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:08:03.101347   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:08:03.101375   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:08:03.115985   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39533
	I0422 11:08:03.116441   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:08:03.116913   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:08:03.116956   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:08:03.117294   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:08:03.117525   27717 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:08:03.119157   27717 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:08:03.119429   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:08:03.119452   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:08:03.133496   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32799
	I0422 11:08:03.133891   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:08:03.134278   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:08:03.134297   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:08:03.134660   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:08:03.134853   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:08:03.135044   27717 certs.go:68] Setting up /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265 for IP: 192.168.39.39
	I0422 11:08:03.135057   27717 certs.go:194] generating shared ca certs ...
	I0422 11:08:03.135073   27717 certs.go:226] acquiring lock for ca certs: {Name:mk0b77082b88c771d0b00be5267ca31dfee6f85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:08:03.135180   27717 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key
	I0422 11:08:03.135214   27717 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key
	I0422 11:08:03.135223   27717 certs.go:256] generating profile certs ...
	I0422 11:08:03.135284   27717 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key
	I0422 11:08:03.135305   27717 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.a296c170
	I0422 11:08:03.135316   27717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.a296c170 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150 192.168.39.39 192.168.39.254]
	I0422 11:08:03.278006   27717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.a296c170 ...
	I0422 11:08:03.278033   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.a296c170: {Name:mk6c5e1350c2c2683938acc8747d6aca8f9b695f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:08:03.278219   27717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.a296c170 ...
	I0422 11:08:03.278237   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.a296c170: {Name:mkb01e1ae1e9af5af1e53d30f02544be7ca37e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:08:03.278324   27717 certs.go:381] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.a296c170 -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt
	I0422 11:08:03.278479   27717 certs.go:385] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.a296c170 -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key
	I0422 11:08:03.278636   27717 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key
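The apiserver serving certificate for the new control-plane member is generated with SANs covering the in-cluster service IP, localhost, both node IPs and the shared VIP ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150 192.168.39.39 192.168.39.254] above). If a join later fails with TLS errors, one way to double-check the SANs on the node, once the cert has been copied to the path used further down in this log, is a standard openssl query:

    openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A2 'Subject Alternative Name'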
	I0422 11:08:03.278655   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 11:08:03.278672   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 11:08:03.278693   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 11:08:03.278711   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 11:08:03.278727   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 11:08:03.278745   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 11:08:03.278763   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 11:08:03.278780   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 11:08:03.278834   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem (1338 bytes)
	W0422 11:08:03.278872   27717 certs.go:480] ignoring /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945_empty.pem, impossibly tiny 0 bytes
	I0422 11:08:03.278885   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem (1679 bytes)
	I0422 11:08:03.278914   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem (1078 bytes)
	I0422 11:08:03.278950   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem (1123 bytes)
	I0422 11:08:03.278980   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem (1679 bytes)
	I0422 11:08:03.279038   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:08:03.279072   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:08:03.279091   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem -> /usr/share/ca-certificates/14945.pem
	I0422 11:08:03.279110   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /usr/share/ca-certificates/149452.pem
	I0422 11:08:03.279149   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:08:03.282344   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:08:03.282743   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:08:03.282766   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:08:03.283013   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:08:03.283213   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:08:03.283375   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:08:03.283539   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:08:03.357320   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0422 11:08:03.363336   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0422 11:08:03.376122   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0422 11:08:03.381121   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0422 11:08:03.392577   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0422 11:08:03.397766   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0422 11:08:03.409097   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0422 11:08:03.414145   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0422 11:08:03.426034   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0422 11:08:03.432534   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0422 11:08:03.445588   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0422 11:08:03.451438   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0422 11:08:03.463931   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 11:08:03.492712   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 11:08:03.522027   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 11:08:03.550733   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 11:08:03.578488   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0422 11:08:03.606234   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 11:08:03.633142   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 11:08:03.661421   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 11:08:03.689994   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 11:08:03.719276   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem --> /usr/share/ca-certificates/14945.pem (1338 bytes)
	I0422 11:08:03.749292   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /usr/share/ca-certificates/149452.pem (1708 bytes)
	I0422 11:08:03.778394   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0422 11:08:03.797583   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0422 11:08:03.817573   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0422 11:08:03.838503   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0422 11:08:03.857851   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0422 11:08:03.878554   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0422 11:08:03.898946   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0422 11:08:03.918974   27717 ssh_runner.go:195] Run: openssl version
	I0422 11:08:03.925494   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149452.pem && ln -fs /usr/share/ca-certificates/149452.pem /etc/ssl/certs/149452.pem"
	I0422 11:08:03.938792   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149452.pem
	I0422 11:08:03.944047   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 10:51 /usr/share/ca-certificates/149452.pem
	I0422 11:08:03.944113   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149452.pem
	I0422 11:08:03.950776   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149452.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 11:08:03.963579   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 11:08:03.976266   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:08:03.981506   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:08:03.981564   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:08:03.988812   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 11:08:04.001732   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14945.pem && ln -fs /usr/share/ca-certificates/14945.pem /etc/ssl/certs/14945.pem"
	I0422 11:08:04.016188   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14945.pem
	I0422 11:08:04.021824   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 10:51 /usr/share/ca-certificates/14945.pem
	I0422 11:08:04.021884   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14945.pem
	I0422 11:08:04.028182   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14945.pem /etc/ssl/certs/51391683.0"
	I0422 11:08:04.041323   27717 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 11:08:04.046091   27717 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 11:08:04.046146   27717 kubeadm.go:928] updating node {m02 192.168.39.39 8443 v1.30.0 crio true true} ...
	I0422 11:08:04.046227   27717 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-821265-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 11:08:04.046268   27717 kube-vip.go:111] generating kube-vip config ...
	I0422 11:08:04.046302   27717 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0422 11:08:04.066913   27717 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0422 11:08:04.066976   27717 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
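The static-pod manifest above provides the HA entry point: kube-vip runs on every control-plane node, uses leader election (vip_leaderelection with the plndr-cp-lock lease) to decide which member binds the virtual IP 192.168.39.254 on eth0, and with cp_enable/lb_enable load-balances API traffic on port 8443 across the control plane. Assuming eth0 as configured above, the current VIP holder can be spotted directly on the nodes:

    # only the elected leader shows the VIP on its interface
    ip -4 addr show dev eth0 | grep 192.168.39.254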
	I0422 11:08:04.067031   27717 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 11:08:04.079006   27717 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0422 11:08:04.079071   27717 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0422 11:08:04.090862   27717 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0422 11:08:04.090893   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0422 11:08:04.090933   27717 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0422 11:08:04.090963   27717 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0422 11:08:04.090969   27717 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0422 11:08:04.095975   27717 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0422 11:08:04.096002   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0422 11:08:05.465221   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0422 11:08:05.465294   27717 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0422 11:08:05.471030   27717 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0422 11:08:05.471073   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0422 11:08:05.502872   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:08:05.524045   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0422 11:08:05.524147   27717 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0422 11:08:05.543706   27717 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0422 11:08:05.543749   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
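Because the freshly provisioned node has no Kubernetes binaries, kubectl, kubeadm and kubelet v1.30.0 are pulled from dl.k8s.io (with .sha256 checksum files) into the local cache and then copied to /var/lib/minikube/binaries/v1.30.0 over SSH. A manual equivalent of the download-and-verify step, using the same URLs as logged, would look roughly like:

    curl -fLO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet
    curl -fLO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check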
	I0422 11:08:06.200090   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0422 11:08:06.211906   27717 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0422 11:08:06.232327   27717 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 11:08:06.252239   27717 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0422 11:08:06.271763   27717 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0422 11:08:06.276972   27717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 11:08:06.293357   27717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:08:06.434830   27717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 11:08:06.454435   27717 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:08:06.454789   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:08:06.454834   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:08:06.470668   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45259
	I0422 11:08:06.471092   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:08:06.471574   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:08:06.471598   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:08:06.471948   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:08:06.472182   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:08:06.472345   27717 start.go:316] joinCluster: &{Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:08:06.472450   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0422 11:08:06.472466   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:08:06.475406   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:08:06.475811   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:08:06.475845   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:08:06.475963   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:08:06.476141   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:08:06.476304   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:08:06.476443   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:08:06.635304   27717 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 11:08:06.635354   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z87d26.j7b7qlu8fy64qymo --discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-821265-m02 --control-plane --apiserver-advertise-address=192.168.39.39 --apiserver-bind-port=8443"
	I0422 11:08:32.117047   27717 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z87d26.j7b7qlu8fy64qymo --discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-821265-m02 --control-plane --apiserver-advertise-address=192.168.39.39 --apiserver-bind-port=8443": (25.481666773s)
	I0422 11:08:32.117085   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0422 11:08:32.697064   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-821265-m02 minikube.k8s.io/updated_at=2024_04_22T11_08_32_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437 minikube.k8s.io/name=ha-821265 minikube.k8s.io/primary=false
	I0422 11:08:32.865477   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-821265-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0422 11:08:33.004395   27717 start.go:318] duration metric: took 26.532045458s to joinCluster
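The join itself is orchestrated from the primary node: kubeadm token create --print-join-command (run above with --ttl=0) mints a bootstrap token and prints the full kubeadm join command, including the --discovery-token-ca-cert-hash that lets the new member verify it is talking to the right control plane. That hash is the SHA-256 of the cluster CA's public key; assuming the usual kubeadm CA location, it can be recomputed with:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'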
	I0422 11:08:33.004479   27717 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 11:08:33.006372   27717 out.go:177] * Verifying Kubernetes components...
	I0422 11:08:33.004820   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:08:33.007959   27717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:08:33.213054   27717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 11:08:33.238317   27717 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 11:08:33.238542   27717 kapi.go:59] client config for ha-821265: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.crt", KeyFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key", CAFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0422 11:08:33.238605   27717 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.150:8443
	I0422 11:08:33.238788   27717 node_ready.go:35] waiting up to 6m0s for node "ha-821265-m02" to be "Ready" ...
	I0422 11:08:33.238893   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:33.238905   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:33.238915   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:33.238925   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:33.249378   27717 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0422 11:08:33.739962   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:33.739990   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:33.740003   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:33.740013   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:33.747289   27717 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0422 11:08:34.239456   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:34.239482   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:34.239494   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:34.239500   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:34.244012   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:34.739569   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:34.739587   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:34.739594   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:34.739599   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:34.743005   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:35.239308   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:35.239338   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:35.239348   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:35.239360   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:35.242933   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:35.243480   27717 node_ready.go:53] node "ha-821265-m02" has status "Ready":"False"
	I0422 11:08:35.739880   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:35.739905   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:35.739916   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:35.739923   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:35.744182   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:36.239509   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:36.239532   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:36.239540   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:36.239543   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:36.243219   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:36.739644   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:36.739669   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:36.739677   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:36.739679   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:36.745017   27717 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 11:08:37.239990   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:37.240010   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:37.240019   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:37.240022   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:37.243831   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:37.244762   27717 node_ready.go:53] node "ha-821265-m02" has status "Ready":"False"
	I0422 11:08:37.739455   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:37.739483   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:37.739493   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:37.739500   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:37.743417   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:38.239764   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:38.239788   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:38.239796   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:38.239801   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:38.243425   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:38.739666   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:38.739689   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:38.739697   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:38.739704   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:38.743497   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:39.239694   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:39.239726   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:39.239734   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:39.239737   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:39.311498   27717 round_trippers.go:574] Response Status: 200 OK in 71 milliseconds
	I0422 11:08:39.312213   27717 node_ready.go:53] node "ha-821265-m02" has status "Ready":"False"
	I0422 11:08:39.739534   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:39.739561   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:39.739572   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:39.739577   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:39.743236   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:40.239693   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:40.239722   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:40.239731   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:40.239737   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:40.243245   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:40.739236   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:40.739255   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:40.739262   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:40.739267   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:40.743188   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:41.239087   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:41.239109   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.239116   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.239120   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.243241   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:41.243875   27717 node_ready.go:49] node "ha-821265-m02" has status "Ready":"True"
	I0422 11:08:41.243892   27717 node_ready.go:38] duration metric: took 8.00507777s for node "ha-821265-m02" to be "Ready" ...
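The node_ready loop above is a plain GET on /api/v1/nodes/ha-821265-m02 repeated until the Ready condition reports True, which took about 8 seconds here. Outside the test harness the same wait can be expressed with kubectl (an equivalent check, not what minikube runs internally):

    kubectl wait --for=condition=Ready node/ha-821265-m02 --timeout=6m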
	I0422 11:08:41.243900   27717 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 11:08:41.243996   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:08:41.244012   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.244023   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.244031   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.250578   27717 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0422 11:08:41.257431   27717 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ft2jl" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:41.257503   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ft2jl
	I0422 11:08:41.257508   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.257516   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.257519   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.261931   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:41.263190   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:41.263205   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.263214   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.263221   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.266990   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:41.267525   27717 pod_ready.go:92] pod "coredns-7db6d8ff4d-ft2jl" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:41.267539   27717 pod_ready.go:81] duration metric: took 10.084348ms for pod "coredns-7db6d8ff4d-ft2jl" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:41.267548   27717 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ht7jl" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:41.267594   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ht7jl
	I0422 11:08:41.267601   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.267608   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.267612   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.283136   27717 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0422 11:08:41.283905   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:41.283919   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.283929   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.283937   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.287754   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:41.288387   27717 pod_ready.go:92] pod "coredns-7db6d8ff4d-ht7jl" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:41.288406   27717 pod_ready.go:81] duration metric: took 20.852945ms for pod "coredns-7db6d8ff4d-ht7jl" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:41.288415   27717 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:41.288465   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265
	I0422 11:08:41.288472   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.288479   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.288484   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.291524   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:41.292279   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:41.292292   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.292303   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.292309   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.295532   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:41.296238   27717 pod_ready.go:92] pod "etcd-ha-821265" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:41.296258   27717 pod_ready.go:81] duration metric: took 7.834312ms for pod "etcd-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:41.296266   27717 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:41.296325   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:41.296335   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.296343   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.296348   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.299964   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:41.301465   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:41.301479   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.301488   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.301493   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.304174   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:41.797164   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:41.797186   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.797194   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.797214   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.801288   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:41.802038   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:41.802054   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:41.802061   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:41.802065   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:41.804980   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:42.296631   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:42.296655   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:42.296663   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:42.296667   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:42.300192   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:42.300858   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:42.300871   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:42.300877   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:42.300881   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:42.303625   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:42.797421   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:42.797440   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:42.797449   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:42.797452   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:42.806273   27717 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0422 11:08:42.807009   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:42.807027   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:42.807038   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:42.807045   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:42.810179   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:43.297029   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:43.297055   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:43.297067   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:43.297073   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:43.300723   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:43.301660   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:43.301674   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:43.301680   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:43.301683   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:43.304321   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:43.304880   27717 pod_ready.go:102] pod "etcd-ha-821265-m02" in "kube-system" namespace has status "Ready":"False"
	I0422 11:08:43.796822   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:43.796843   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:43.796851   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:43.796855   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:43.800291   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:43.800956   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:43.800973   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:43.800983   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:43.800988   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:43.803395   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:44.297325   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:44.297352   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:44.297363   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:44.297369   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:44.301457   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:44.302129   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:44.302144   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:44.302152   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:44.302158   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:44.305097   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:44.797092   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:44.797114   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:44.797121   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:44.797125   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:44.800678   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:44.801608   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:44.801623   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:44.801629   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:44.801632   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:44.804656   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:45.296805   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:45.296830   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:45.296839   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:45.296844   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:45.301027   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:45.301971   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:45.301988   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:45.301995   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:45.301998   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:45.305320   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:45.305908   27717 pod_ready.go:102] pod "etcd-ha-821265-m02" in "kube-system" namespace has status "Ready":"False"
	I0422 11:08:45.797336   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:08:45.797361   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:45.797372   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:45.797379   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:45.801701   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:45.802294   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:45.802311   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:45.802317   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:45.802323   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:45.805667   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:45.806539   27717 pod_ready.go:92] pod "etcd-ha-821265-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:45.806561   27717 pod_ready.go:81] duration metric: took 4.510288487s for pod "etcd-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:45.806580   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:45.806649   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265
	I0422 11:08:45.806660   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:45.806671   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:45.806681   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:45.810462   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:45.811135   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:45.811149   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:45.811156   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:45.811160   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:45.814298   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:45.818487   27717 pod_ready.go:92] pod "kube-apiserver-ha-821265" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:45.818505   27717 pod_ready.go:81] duration metric: took 11.913247ms for pod "kube-apiserver-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:45.818514   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:45.818578   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:08:45.818588   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:45.818596   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:45.818600   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:45.822562   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:45.823295   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:45.823307   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:45.823314   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:45.823318   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:45.828332   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:46.318942   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:08:46.318962   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:46.318970   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:46.318977   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:46.322350   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:46.323120   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:46.323134   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:46.323142   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:46.323146   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:46.325549   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:46.819125   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:08:46.819144   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:46.819152   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:46.819155   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:46.823840   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:46.824766   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:46.824801   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:46.824813   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:46.824818   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:46.828927   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:47.318760   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:08:47.318788   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:47.318799   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:47.318805   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:47.322364   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:47.323127   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:47.323145   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:47.323152   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:47.323158   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:47.326356   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:47.819584   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:08:47.819607   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:47.819615   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:47.819619   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:47.823835   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:47.824816   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:47.824833   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:47.824841   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:47.824845   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:47.827487   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:47.828054   27717 pod_ready.go:102] pod "kube-apiserver-ha-821265-m02" in "kube-system" namespace has status "Ready":"False"
	I0422 11:08:48.319273   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:08:48.319298   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:48.319317   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:48.319325   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:48.323004   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:48.324098   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:48.324115   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:48.324125   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:48.324130   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:48.327545   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:48.818809   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:08:48.818832   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:48.818839   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:48.818842   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:48.822692   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:48.823429   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:48.823452   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:48.823461   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:48.823466   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:48.826653   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:49.319641   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:08:49.319671   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.319682   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.319686   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.323515   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:49.324375   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:49.324394   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.324405   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.324410   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.327996   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:49.328647   27717 pod_ready.go:92] pod "kube-apiserver-ha-821265-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:49.328667   27717 pod_ready.go:81] duration metric: took 3.510146972s for pod "kube-apiserver-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:49.328677   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:49.328737   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265
	I0422 11:08:49.328741   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.328748   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.328752   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.331952   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:49.332891   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:49.332908   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.332916   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.332920   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.335564   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:08:49.336180   27717 pod_ready.go:92] pod "kube-controller-manager-ha-821265" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:49.336208   27717 pod_ready.go:81] duration metric: took 7.523243ms for pod "kube-controller-manager-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:49.336222   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:49.336291   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m02
	I0422 11:08:49.336304   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.336313   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.336318   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.339488   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:49.340156   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:49.340172   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.340179   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.340183   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.343204   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:49.343915   27717 pod_ready.go:92] pod "kube-controller-manager-ha-821265-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:49.343936   27717 pod_ready.go:81] duration metric: took 7.706743ms for pod "kube-controller-manager-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:49.343946   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j2hpk" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:49.439213   27717 request.go:629] Waited for 95.204097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j2hpk
	I0422 11:08:49.439299   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j2hpk
	I0422 11:08:49.439312   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.439322   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.439332   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.443409   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:49.639154   27717 request.go:629] Waited for 194.343471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:49.639214   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:49.639220   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.639228   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.639231   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.643437   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:49.644072   27717 pod_ready.go:92] pod "kube-proxy-j2hpk" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:49.644094   27717 pod_ready.go:81] duration metric: took 300.14016ms for pod "kube-proxy-j2hpk" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:49.644108   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w7r9d" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:49.839545   27717 request.go:629] Waited for 195.375525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w7r9d
	I0422 11:08:49.839617   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w7r9d
	I0422 11:08:49.839623   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:49.839630   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:49.839634   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:49.843443   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:50.039830   27717 request.go:629] Waited for 195.191671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:50.039924   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:50.039934   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:50.039946   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:50.039958   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:50.043198   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:50.044101   27717 pod_ready.go:92] pod "kube-proxy-w7r9d" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:50.044119   27717 pod_ready.go:81] duration metric: took 400.00501ms for pod "kube-proxy-w7r9d" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:50.044128   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:50.239411   27717 request.go:629] Waited for 195.20436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265
	I0422 11:08:50.239481   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265
	I0422 11:08:50.239492   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:50.239501   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:50.239510   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:50.243228   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:50.439633   27717 request.go:629] Waited for 195.390191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:50.439708   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:08:50.439717   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:50.439725   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:50.439734   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:50.444259   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:08:50.444881   27717 pod_ready.go:92] pod "kube-scheduler-ha-821265" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:50.444900   27717 pod_ready.go:81] duration metric: took 400.765645ms for pod "kube-scheduler-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:50.444909   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:50.639902   27717 request.go:629] Waited for 194.938684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265-m02
	I0422 11:08:50.639970   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265-m02
	I0422 11:08:50.639976   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:50.639987   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:50.639998   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:50.643883   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:50.840108   27717 request.go:629] Waited for 195.435349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:50.840212   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:08:50.840231   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:50.840242   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:50.840250   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:50.843620   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:50.845161   27717 pod_ready.go:92] pod "kube-scheduler-ha-821265-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 11:08:50.845179   27717 pod_ready.go:81] duration metric: took 400.263918ms for pod "kube-scheduler-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:08:50.845190   27717 pod_ready.go:38] duration metric: took 9.601243901s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
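The pod_ready.go lines above show the test harness polling each system-critical pod (etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) and its node until the pod reports the Ready condition, retrying roughly every 500ms. As a hedged illustration only (not minikube's actual pod_ready.go, and with a placeholder kubeconfig path and pod name), a minimal client-go loop for that kind of readiness wait might look like this:

// podready_sketch.go - hedged sketch of waiting for a pod's Ready condition with client-go,
// loosely mirroring the pod_ready.go polling seen in the log above. Not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready, matching the pod_ready.go:92 lines above
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // similar cadence to the ~500ms GETs in the log
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Placeholder kubeconfig path; the real test uses the cluster's generated config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(context.Background(), cs, "kube-system", "etcd-ha-821265-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}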
	I0422 11:08:50.845203   27717 api_server.go:52] waiting for apiserver process to appear ...
	I0422 11:08:50.845258   27717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:08:50.862092   27717 api_server.go:72] duration metric: took 17.857570443s to wait for apiserver process to appear ...
	I0422 11:08:50.862115   27717 api_server.go:88] waiting for apiserver healthz status ...
	I0422 11:08:50.862131   27717 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0422 11:08:50.868932   27717 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I0422 11:08:50.869006   27717 round_trippers.go:463] GET https://192.168.39.150:8443/version
	I0422 11:08:50.869018   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:50.869028   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:50.869035   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:50.869991   27717 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0422 11:08:50.870124   27717 api_server.go:141] control plane version: v1.30.0
	I0422 11:08:50.870142   27717 api_server.go:131] duration metric: took 8.020804ms to wait for apiserver health ...
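The api_server.go lines above correspond to a plain GET against the apiserver's /healthz endpoint, expecting the literal body "ok", followed by a /version request. A hedged sketch of that probe pattern is below; it assumes an *http.Client already configured with the cluster CA and client certificates (omitted here), which the real check requires.

// healthz_sketch.go - hedged illustration of the "/healthz returned 200: ok" probe above.
// Generic example, not minikube's api_server.go; TLS client auth setup is omitted.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(client *http.Client, baseURL string) error {
	resp, err := client.Get(baseURL + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
	}
	return nil // matches the "returned 200: ok" lines in the log
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second} // placeholder; the real probe needs TLS client certs
	if err := checkHealthz(client, "https://192.168.39.150:8443"); err != nil {
		fmt.Println("apiserver not healthy yet:", err)
		return
	}
	fmt.Println("apiserver healthy")
}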
	I0422 11:08:50.870151   27717 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 11:08:51.039531   27717 request.go:629] Waited for 169.318698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:08:51.039579   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:08:51.039586   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:51.039593   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:51.039598   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:51.046315   27717 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0422 11:08:51.052196   27717 system_pods.go:59] 17 kube-system pods found
	I0422 11:08:51.052232   27717 system_pods.go:61] "coredns-7db6d8ff4d-ft2jl" [09e14815-b8e9-4b60-9b2c-c7d86cccb594] Running
	I0422 11:08:51.052237   27717 system_pods.go:61] "coredns-7db6d8ff4d-ht7jl" [c404a830-ddce-4c49-9e54-05d45871b4b0] Running
	I0422 11:08:51.052240   27717 system_pods.go:61] "etcd-ha-821265" [1a27ab5d-19af-49d9-8eb3-e50b7e2225a5] Running
	I0422 11:08:51.052243   27717 system_pods.go:61] "etcd-ha-821265-m02" [4ba0de26-81d6-423b-a5a4-9fd88c90ebdc] Running
	I0422 11:08:51.052246   27717 system_pods.go:61] "kindnet-jm2pd" [0550a9db-b106-4ac4-9976-118d80927509] Running
	I0422 11:08:51.052249   27717 system_pods.go:61] "kindnet-qbq9z" [9751a17f-e26b-4ba8-81ce-077103c0aa1c] Running
	I0422 11:08:51.052252   27717 system_pods.go:61] "kube-apiserver-ha-821265" [1e20fb49-c54d-49fd-900b-38e347a52f9a] Running
	I0422 11:08:51.052254   27717 system_pods.go:61] "kube-apiserver-ha-821265-m02" [95616042-7a05-4fc3-a1ef-7fd56c8b3cd8] Running
	I0422 11:08:51.052258   27717 system_pods.go:61] "kube-controller-manager-ha-821265" [51933fc1-af7c-4fb0-b811-b6312f4b4d29] Running
	I0422 11:08:51.052260   27717 system_pods.go:61] "kube-controller-manager-ha-821265-m02" [4af2c432-4c7c-4f1f-98da-34af2648d7db] Running
	I0422 11:08:51.052263   27717 system_pods.go:61] "kube-proxy-j2hpk" [3ebf4ab0-bc76-4f5c-916e-6b28a81dc031] Running
	I0422 11:08:51.052266   27717 system_pods.go:61] "kube-proxy-w7r9d" [56a4f7fc-5ce0-4d77-b30f-9d39cded457c] Running
	I0422 11:08:51.052269   27717 system_pods.go:61] "kube-scheduler-ha-821265" [929e0c00-c49a-4b96-8f6a-7a84ae4f117c] Running
	I0422 11:08:51.052272   27717 system_pods.go:61] "kube-scheduler-ha-821265-m02" [589c30c7-d9df-4745-bdb3-87ae02ab2b67] Running
	I0422 11:08:51.052274   27717 system_pods.go:61] "kube-vip-ha-821265" [9322f0ee-9e3e-4585-9388-44ccd1417371] Running
	I0422 11:08:51.052277   27717 system_pods.go:61] "kube-vip-ha-821265-m02" [466697de-7dbe-4e6c-be95-9463a9548cde] Running
	I0422 11:08:51.052280   27717 system_pods.go:61] "storage-provisioner" [4b44da93-f3fa-49b7-a701-5ab7a430374f] Running
	I0422 11:08:51.052285   27717 system_pods.go:74] duration metric: took 182.128313ms to wait for pod list to return data ...
	I0422 11:08:51.052292   27717 default_sa.go:34] waiting for default service account to be created ...
	I0422 11:08:51.239721   27717 request.go:629] Waited for 187.364826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/default/serviceaccounts
	I0422 11:08:51.239797   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/default/serviceaccounts
	I0422 11:08:51.239811   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:51.239821   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:51.239829   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:51.243700   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:08:51.243954   27717 default_sa.go:45] found service account: "default"
	I0422 11:08:51.243974   27717 default_sa.go:55] duration metric: took 191.676706ms for default service account to be created ...
	I0422 11:08:51.243982   27717 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 11:08:51.439120   27717 request.go:629] Waited for 195.06203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:08:51.439184   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:08:51.439190   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:51.439197   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:51.439201   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:51.445273   27717 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0422 11:08:51.450063   27717 system_pods.go:86] 17 kube-system pods found
	I0422 11:08:51.450088   27717 system_pods.go:89] "coredns-7db6d8ff4d-ft2jl" [09e14815-b8e9-4b60-9b2c-c7d86cccb594] Running
	I0422 11:08:51.450094   27717 system_pods.go:89] "coredns-7db6d8ff4d-ht7jl" [c404a830-ddce-4c49-9e54-05d45871b4b0] Running
	I0422 11:08:51.450098   27717 system_pods.go:89] "etcd-ha-821265" [1a27ab5d-19af-49d9-8eb3-e50b7e2225a5] Running
	I0422 11:08:51.450103   27717 system_pods.go:89] "etcd-ha-821265-m02" [4ba0de26-81d6-423b-a5a4-9fd88c90ebdc] Running
	I0422 11:08:51.450107   27717 system_pods.go:89] "kindnet-jm2pd" [0550a9db-b106-4ac4-9976-118d80927509] Running
	I0422 11:08:51.450111   27717 system_pods.go:89] "kindnet-qbq9z" [9751a17f-e26b-4ba8-81ce-077103c0aa1c] Running
	I0422 11:08:51.450115   27717 system_pods.go:89] "kube-apiserver-ha-821265" [1e20fb49-c54d-49fd-900b-38e347a52f9a] Running
	I0422 11:08:51.450119   27717 system_pods.go:89] "kube-apiserver-ha-821265-m02" [95616042-7a05-4fc3-a1ef-7fd56c8b3cd8] Running
	I0422 11:08:51.450123   27717 system_pods.go:89] "kube-controller-manager-ha-821265" [51933fc1-af7c-4fb0-b811-b6312f4b4d29] Running
	I0422 11:08:51.450130   27717 system_pods.go:89] "kube-controller-manager-ha-821265-m02" [4af2c432-4c7c-4f1f-98da-34af2648d7db] Running
	I0422 11:08:51.450134   27717 system_pods.go:89] "kube-proxy-j2hpk" [3ebf4ab0-bc76-4f5c-916e-6b28a81dc031] Running
	I0422 11:08:51.450141   27717 system_pods.go:89] "kube-proxy-w7r9d" [56a4f7fc-5ce0-4d77-b30f-9d39cded457c] Running
	I0422 11:08:51.450145   27717 system_pods.go:89] "kube-scheduler-ha-821265" [929e0c00-c49a-4b96-8f6a-7a84ae4f117c] Running
	I0422 11:08:51.450151   27717 system_pods.go:89] "kube-scheduler-ha-821265-m02" [589c30c7-d9df-4745-bdb3-87ae02ab2b67] Running
	I0422 11:08:51.450155   27717 system_pods.go:89] "kube-vip-ha-821265" [9322f0ee-9e3e-4585-9388-44ccd1417371] Running
	I0422 11:08:51.450162   27717 system_pods.go:89] "kube-vip-ha-821265-m02" [466697de-7dbe-4e6c-be95-9463a9548cde] Running
	I0422 11:08:51.450167   27717 system_pods.go:89] "storage-provisioner" [4b44da93-f3fa-49b7-a701-5ab7a430374f] Running
	I0422 11:08:51.450176   27717 system_pods.go:126] duration metric: took 206.186469ms to wait for k8s-apps to be running ...
	I0422 11:08:51.450184   27717 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 11:08:51.450235   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:08:51.466354   27717 system_svc.go:56] duration metric: took 16.160874ms WaitForService to wait for kubelet
	I0422 11:08:51.466383   27717 kubeadm.go:576] duration metric: took 18.461863443s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 11:08:51.466405   27717 node_conditions.go:102] verifying NodePressure condition ...
	I0422 11:08:51.640057   27717 request.go:629] Waited for 173.571533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes
	I0422 11:08:51.640104   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes
	I0422 11:08:51.640109   27717 round_trippers.go:469] Request Headers:
	I0422 11:08:51.640116   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:08:51.640119   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:08:51.645262   27717 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 11:08:51.646980   27717 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 11:08:51.647003   27717 node_conditions.go:123] node cpu capacity is 2
	I0422 11:08:51.647016   27717 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 11:08:51.647021   27717 node_conditions.go:123] node cpu capacity is 2
	I0422 11:08:51.647026   27717 node_conditions.go:105] duration metric: took 180.615876ms to run NodePressure ...
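The node_conditions.go lines above list the nodes and report each node's ephemeral-storage and CPU capacity while verifying that no pressure conditions are set. A rough client-go sketch of reading those fields follows; it is an illustration under assumptions (placeholder kubeconfig path), not minikube's node_conditions.go.

// nodepressure_sketch.go - hedged sketch of reading node capacity and pressure conditions.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should all be False on a healthy node.
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
				fmt.Printf("  node %s reports pressure: %s\n", n.Name, c.Type)
			}
		}
	}
}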
	I0422 11:08:51.647041   27717 start.go:240] waiting for startup goroutines ...
	I0422 11:08:51.647076   27717 start.go:254] writing updated cluster config ...
	I0422 11:08:51.649362   27717 out.go:177] 
	I0422 11:08:51.651069   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:08:51.651185   27717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:08:51.652866   27717 out.go:177] * Starting "ha-821265-m03" control-plane node in "ha-821265" cluster
	I0422 11:08:51.654285   27717 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 11:08:51.654313   27717 cache.go:56] Caching tarball of preloaded images
	I0422 11:08:51.654406   27717 preload.go:173] Found /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 11:08:51.654419   27717 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 11:08:51.654510   27717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:08:51.654680   27717 start.go:360] acquireMachinesLock for ha-821265-m03: {Name:mk5cb9b294e703b264c1f97ac968ffd01e93b576 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 11:08:51.654736   27717 start.go:364] duration metric: took 34.256µs to acquireMachinesLock for "ha-821265-m03"
	I0422 11:08:51.654762   27717 start.go:93] Provisioning new machine with config: &{Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 11:08:51.654873   27717 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0422 11:08:51.656529   27717 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 11:08:51.656614   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:08:51.656648   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:08:51.671283   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41653
	I0422 11:08:51.671735   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:08:51.672165   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:08:51.672182   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:08:51.672539   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:08:51.672749   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetMachineName
	I0422 11:08:51.672936   27717 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:08:51.673119   27717 start.go:159] libmachine.API.Create for "ha-821265" (driver="kvm2")
	I0422 11:08:51.673147   27717 client.go:168] LocalClient.Create starting
	I0422 11:08:51.673180   27717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem
	I0422 11:08:51.673219   27717 main.go:141] libmachine: Decoding PEM data...
	I0422 11:08:51.673235   27717 main.go:141] libmachine: Parsing certificate...
	I0422 11:08:51.673297   27717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem
	I0422 11:08:51.673318   27717 main.go:141] libmachine: Decoding PEM data...
	I0422 11:08:51.673334   27717 main.go:141] libmachine: Parsing certificate...
	I0422 11:08:51.673359   27717 main.go:141] libmachine: Running pre-create checks...
	I0422 11:08:51.673370   27717 main.go:141] libmachine: (ha-821265-m03) Calling .PreCreateCheck
	I0422 11:08:51.673549   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetConfigRaw
	I0422 11:08:51.673962   27717 main.go:141] libmachine: Creating machine...
	I0422 11:08:51.673978   27717 main.go:141] libmachine: (ha-821265-m03) Calling .Create
	I0422 11:08:51.674114   27717 main.go:141] libmachine: (ha-821265-m03) Creating KVM machine...
	I0422 11:08:51.675559   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found existing default KVM network
	I0422 11:08:51.675687   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found existing private KVM network mk-ha-821265
	I0422 11:08:51.675828   27717 main.go:141] libmachine: (ha-821265-m03) Setting up store path in /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03 ...
	I0422 11:08:51.675851   27717 main.go:141] libmachine: (ha-821265-m03) Building disk image from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0422 11:08:51.675925   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:51.675807   28516 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:08:51.675993   27717 main.go:141] libmachine: (ha-821265-m03) Downloading /home/jenkins/minikube-integration/18711-7633/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0422 11:08:51.886984   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:51.886854   28516 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa...
	I0422 11:08:52.024651   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:52.024529   28516 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/ha-821265-m03.rawdisk...
	I0422 11:08:52.024687   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Writing magic tar header
	I0422 11:08:52.024703   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Writing SSH key tar header
	I0422 11:08:52.024721   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:52.024685   28516 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03 ...
	I0422 11:08:52.024903   27717 main.go:141] libmachine: (ha-821265-m03) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03 (perms=drwx------)
	I0422 11:08:52.024924   27717 main.go:141] libmachine: (ha-821265-m03) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines (perms=drwxr-xr-x)
	I0422 11:08:52.024934   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03
	I0422 11:08:52.024944   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines
	I0422 11:08:52.024955   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:08:52.024963   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633
	I0422 11:08:52.024978   27717 main.go:141] libmachine: (ha-821265-m03) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube (perms=drwxr-xr-x)
	I0422 11:08:52.024987   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 11:08:52.024995   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Checking permissions on dir: /home/jenkins
	I0422 11:08:52.025003   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Checking permissions on dir: /home
	I0422 11:08:52.025012   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Skipping /home - not owner
	I0422 11:08:52.025024   27717 main.go:141] libmachine: (ha-821265-m03) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633 (perms=drwxrwxr-x)
	I0422 11:08:52.025033   27717 main.go:141] libmachine: (ha-821265-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 11:08:52.025042   27717 main.go:141] libmachine: (ha-821265-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 11:08:52.025050   27717 main.go:141] libmachine: (ha-821265-m03) Creating domain...
	I0422 11:08:52.025930   27717 main.go:141] libmachine: (ha-821265-m03) define libvirt domain using xml: 
	I0422 11:08:52.025952   27717 main.go:141] libmachine: (ha-821265-m03) <domain type='kvm'>
	I0422 11:08:52.025962   27717 main.go:141] libmachine: (ha-821265-m03)   <name>ha-821265-m03</name>
	I0422 11:08:52.025970   27717 main.go:141] libmachine: (ha-821265-m03)   <memory unit='MiB'>2200</memory>
	I0422 11:08:52.025979   27717 main.go:141] libmachine: (ha-821265-m03)   <vcpu>2</vcpu>
	I0422 11:08:52.025990   27717 main.go:141] libmachine: (ha-821265-m03)   <features>
	I0422 11:08:52.025999   27717 main.go:141] libmachine: (ha-821265-m03)     <acpi/>
	I0422 11:08:52.026010   27717 main.go:141] libmachine: (ha-821265-m03)     <apic/>
	I0422 11:08:52.026029   27717 main.go:141] libmachine: (ha-821265-m03)     <pae/>
	I0422 11:08:52.026044   27717 main.go:141] libmachine: (ha-821265-m03)     
	I0422 11:08:52.026056   27717 main.go:141] libmachine: (ha-821265-m03)   </features>
	I0422 11:08:52.026067   27717 main.go:141] libmachine: (ha-821265-m03)   <cpu mode='host-passthrough'>
	I0422 11:08:52.026078   27717 main.go:141] libmachine: (ha-821265-m03)   
	I0422 11:08:52.026088   27717 main.go:141] libmachine: (ha-821265-m03)   </cpu>
	I0422 11:08:52.026098   27717 main.go:141] libmachine: (ha-821265-m03)   <os>
	I0422 11:08:52.026113   27717 main.go:141] libmachine: (ha-821265-m03)     <type>hvm</type>
	I0422 11:08:52.026126   27717 main.go:141] libmachine: (ha-821265-m03)     <boot dev='cdrom'/>
	I0422 11:08:52.026137   27717 main.go:141] libmachine: (ha-821265-m03)     <boot dev='hd'/>
	I0422 11:08:52.026147   27717 main.go:141] libmachine: (ha-821265-m03)     <bootmenu enable='no'/>
	I0422 11:08:52.026157   27717 main.go:141] libmachine: (ha-821265-m03)   </os>
	I0422 11:08:52.026166   27717 main.go:141] libmachine: (ha-821265-m03)   <devices>
	I0422 11:08:52.026182   27717 main.go:141] libmachine: (ha-821265-m03)     <disk type='file' device='cdrom'>
	I0422 11:08:52.026200   27717 main.go:141] libmachine: (ha-821265-m03)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/boot2docker.iso'/>
	I0422 11:08:52.026212   27717 main.go:141] libmachine: (ha-821265-m03)       <target dev='hdc' bus='scsi'/>
	I0422 11:08:52.026222   27717 main.go:141] libmachine: (ha-821265-m03)       <readonly/>
	I0422 11:08:52.026231   27717 main.go:141] libmachine: (ha-821265-m03)     </disk>
	I0422 11:08:52.026244   27717 main.go:141] libmachine: (ha-821265-m03)     <disk type='file' device='disk'>
	I0422 11:08:52.026261   27717 main.go:141] libmachine: (ha-821265-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 11:08:52.026278   27717 main.go:141] libmachine: (ha-821265-m03)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/ha-821265-m03.rawdisk'/>
	I0422 11:08:52.026289   27717 main.go:141] libmachine: (ha-821265-m03)       <target dev='hda' bus='virtio'/>
	I0422 11:08:52.026302   27717 main.go:141] libmachine: (ha-821265-m03)     </disk>
	I0422 11:08:52.026313   27717 main.go:141] libmachine: (ha-821265-m03)     <interface type='network'>
	I0422 11:08:52.026338   27717 main.go:141] libmachine: (ha-821265-m03)       <source network='mk-ha-821265'/>
	I0422 11:08:52.026354   27717 main.go:141] libmachine: (ha-821265-m03)       <model type='virtio'/>
	I0422 11:08:52.026362   27717 main.go:141] libmachine: (ha-821265-m03)     </interface>
	I0422 11:08:52.026375   27717 main.go:141] libmachine: (ha-821265-m03)     <interface type='network'>
	I0422 11:08:52.026385   27717 main.go:141] libmachine: (ha-821265-m03)       <source network='default'/>
	I0422 11:08:52.026393   27717 main.go:141] libmachine: (ha-821265-m03)       <model type='virtio'/>
	I0422 11:08:52.026400   27717 main.go:141] libmachine: (ha-821265-m03)     </interface>
	I0422 11:08:52.026408   27717 main.go:141] libmachine: (ha-821265-m03)     <serial type='pty'>
	I0422 11:08:52.026414   27717 main.go:141] libmachine: (ha-821265-m03)       <target port='0'/>
	I0422 11:08:52.026421   27717 main.go:141] libmachine: (ha-821265-m03)     </serial>
	I0422 11:08:52.026428   27717 main.go:141] libmachine: (ha-821265-m03)     <console type='pty'>
	I0422 11:08:52.026436   27717 main.go:141] libmachine: (ha-821265-m03)       <target type='serial' port='0'/>
	I0422 11:08:52.026469   27717 main.go:141] libmachine: (ha-821265-m03)     </console>
	I0422 11:08:52.026493   27717 main.go:141] libmachine: (ha-821265-m03)     <rng model='virtio'>
	I0422 11:08:52.026509   27717 main.go:141] libmachine: (ha-821265-m03)       <backend model='random'>/dev/random</backend>
	I0422 11:08:52.026518   27717 main.go:141] libmachine: (ha-821265-m03)     </rng>
	I0422 11:08:52.026527   27717 main.go:141] libmachine: (ha-821265-m03)     
	I0422 11:08:52.026535   27717 main.go:141] libmachine: (ha-821265-m03)     
	I0422 11:08:52.026543   27717 main.go:141] libmachine: (ha-821265-m03)   </devices>
	I0422 11:08:52.026554   27717 main.go:141] libmachine: (ha-821265-m03) </domain>
	I0422 11:08:52.026562   27717 main.go:141] libmachine: (ha-821265-m03) 
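The XML dump above is the libvirt domain definition the kvm2 driver generates for ha-821265-m03 (boot ISO, raw disk, the mk-ha-821265 and default networks, serial console, virtio RNG). As a generic, hedged sketch only (not the driver's actual code; domainXML is a placeholder for the XML printed above), defining and starting such a domain with the libvirt Go bindings looks roughly like this:

// definedomain_sketch.go - hedged sketch of defining and booting a libvirt domain from XML
// using libvirt.org/go/libvirt. Requires libvirt development headers (cgo).
package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI the config above records (KVMQemuURI)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	domainXML := "<domain type='kvm'>...</domain>" // placeholder for the XML printed in the log

	dom, err := conn.DomainDefineXML(domainXML) // persistently define the domain
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the defined domain
		panic(err)
	}
	fmt.Println("domain defined and started")
}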
	I0422 11:08:52.033440   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:42:52:d4 in network default
	I0422 11:08:52.033919   27717 main.go:141] libmachine: (ha-821265-m03) Ensuring networks are active...
	I0422 11:08:52.033939   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:52.034637   27717 main.go:141] libmachine: (ha-821265-m03) Ensuring network default is active
	I0422 11:08:52.034969   27717 main.go:141] libmachine: (ha-821265-m03) Ensuring network mk-ha-821265 is active
	I0422 11:08:52.035313   27717 main.go:141] libmachine: (ha-821265-m03) Getting domain xml...
	I0422 11:08:52.036058   27717 main.go:141] libmachine: (ha-821265-m03) Creating domain...
	I0422 11:08:53.244492   27717 main.go:141] libmachine: (ha-821265-m03) Waiting to get IP...
	I0422 11:08:53.245385   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:53.245793   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:53.245819   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:53.245787   28516 retry.go:31] will retry after 234.374116ms: waiting for machine to come up
	I0422 11:08:53.482189   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:53.482648   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:53.482685   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:53.482606   28516 retry.go:31] will retry after 381.567774ms: waiting for machine to come up
	I0422 11:08:53.866209   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:53.866689   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:53.866720   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:53.866656   28516 retry.go:31] will retry after 479.573791ms: waiting for machine to come up
	I0422 11:08:54.347782   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:54.348239   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:54.348260   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:54.348185   28516 retry.go:31] will retry after 396.163013ms: waiting for machine to come up
	I0422 11:08:54.745906   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:54.746940   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:54.747002   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:54.746880   28516 retry.go:31] will retry after 604.728808ms: waiting for machine to come up
	I0422 11:08:55.352872   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:55.353362   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:55.353396   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:55.353311   28516 retry.go:31] will retry after 577.189213ms: waiting for machine to come up
	I0422 11:08:55.931772   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:55.932234   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:55.932268   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:55.932166   28516 retry.go:31] will retry after 1.115081687s: waiting for machine to come up
	I0422 11:08:57.050105   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:57.050983   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:57.051025   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:57.050956   28516 retry.go:31] will retry after 944.628006ms: waiting for machine to come up
	I0422 11:08:57.996698   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:57.997154   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:57.997179   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:57.997109   28516 retry.go:31] will retry after 1.130350135s: waiting for machine to come up
	I0422 11:08:59.129494   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:08:59.130069   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:08:59.130089   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:08:59.130031   28516 retry.go:31] will retry after 1.837856027s: waiting for machine to come up
	I0422 11:09:00.969944   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:00.970400   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:09:00.970424   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:09:00.970372   28516 retry.go:31] will retry after 1.911594615s: waiting for machine to come up
	I0422 11:09:02.884148   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:02.884548   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:09:02.884588   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:09:02.884543   28516 retry.go:31] will retry after 3.346493159s: waiting for machine to come up
	I0422 11:09:06.233823   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:06.234193   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:09:06.234218   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:09:06.234169   28516 retry.go:31] will retry after 4.176571643s: waiting for machine to come up
	I0422 11:09:10.414050   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:10.414515   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find current IP address of domain ha-821265-m03 in network mk-ha-821265
	I0422 11:09:10.414544   27717 main.go:141] libmachine: (ha-821265-m03) DBG | I0422 11:09:10.414468   28516 retry.go:31] will retry after 4.838574881s: waiting for machine to come up
	I0422 11:09:15.257405   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.257875   27717 main.go:141] libmachine: (ha-821265-m03) Found IP for machine: 192.168.39.95
	I0422 11:09:15.257895   27717 main.go:141] libmachine: (ha-821265-m03) Reserving static IP address...
	I0422 11:09:15.257908   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has current primary IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.258261   27717 main.go:141] libmachine: (ha-821265-m03) DBG | unable to find host DHCP lease matching {name: "ha-821265-m03", mac: "52:54:00:24:8e:51", ip: "192.168.39.95"} in network mk-ha-821265
	I0422 11:09:15.335329   27717 main.go:141] libmachine: (ha-821265-m03) Reserved static IP address: 192.168.39.95
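Note: the repeated "will retry after …" lines above show libmachine polling libvirt for the new domain's DHCP lease with a growing backoff until the address appears and can be reserved. A minimal, self-contained sketch of that polling pattern in Go — lookupIP is a hypothetical stand-in for the lease query, and this is not minikube's actual retry helper:

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases by MAC.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls until an address appears, backing off between attempts.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		wait = wait * 3 / 2 // grow the interval, roughly like the waits in the log above
	}
	return "", fmt.Errorf("machine with MAC %s never obtained an IP", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:24:8e:51", 10*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}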
	I0422 11:09:15.335356   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Getting to WaitForSSH function...
	I0422 11:09:15.335365   27717 main.go:141] libmachine: (ha-821265-m03) Waiting for SSH to be available...
	I0422 11:09:15.337802   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.338310   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:minikube Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:15.338343   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.338536   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Using SSH client type: external
	I0422 11:09:15.338576   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa (-rw-------)
	I0422 11:09:15.338626   27717 main.go:141] libmachine: (ha-821265-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 11:09:15.338650   27717 main.go:141] libmachine: (ha-821265-m03) DBG | About to run SSH command:
	I0422 11:09:15.338665   27717 main.go:141] libmachine: (ha-821265-m03) DBG | exit 0
	I0422 11:09:15.465225   27717 main.go:141] libmachine: (ha-821265-m03) DBG | SSH cmd err, output: <nil>: 
	I0422 11:09:15.465514   27717 main.go:141] libmachine: (ha-821265-m03) KVM machine creation complete!
	I0422 11:09:15.465854   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetConfigRaw
	I0422 11:09:15.466374   27717 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:09:15.466566   27717 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:09:15.466768   27717 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 11:09:15.466786   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetState
	I0422 11:09:15.468053   27717 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 11:09:15.468067   27717 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 11:09:15.468075   27717 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 11:09:15.468082   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:15.470630   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.470934   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:15.470957   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.471103   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:15.471291   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:15.471444   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:15.471590   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:15.471729   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:09:15.471979   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0422 11:09:15.471991   27717 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 11:09:15.572509   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 11:09:15.572537   27717 main.go:141] libmachine: Detecting the provisioner...
	I0422 11:09:15.572547   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:15.575283   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.575645   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:15.575675   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.575761   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:15.575960   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:15.576098   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:15.576231   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:15.576433   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:09:15.576591   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0422 11:09:15.576603   27717 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 11:09:15.678450   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 11:09:15.678523   27717 main.go:141] libmachine: found compatible host: buildroot
	I0422 11:09:15.678539   27717 main.go:141] libmachine: Provisioning with buildroot...
	I0422 11:09:15.678551   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetMachineName
	I0422 11:09:15.678834   27717 buildroot.go:166] provisioning hostname "ha-821265-m03"
	I0422 11:09:15.678859   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetMachineName
	I0422 11:09:15.679062   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:15.681822   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.682177   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:15.682203   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.682384   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:15.682568   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:15.682727   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:15.682868   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:15.683046   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:09:15.683194   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0422 11:09:15.683205   27717 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-821265-m03 && echo "ha-821265-m03" | sudo tee /etc/hostname
	I0422 11:09:15.806551   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-821265-m03
	
	I0422 11:09:15.806583   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:15.809699   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.810036   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:15.810065   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.810201   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:15.810407   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:15.810583   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:15.810754   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:15.811031   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:09:15.811223   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0422 11:09:15.811248   27717 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-821265-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-821265-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-821265-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 11:09:15.924445   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
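Note: the shell run just above keeps the 127.0.1.1 mapping idempotent: do nothing if the hostname already appears in /etc/hosts, rewrite an existing 127.0.1.1 line if there is one, otherwise append a new entry. The same logic expressed directly over the file contents, as an illustrative sketch:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell above: leave the file alone when the hostname
// is already mapped, otherwise rewrite or append the 127.0.1.1 line.
func ensureHostsEntry(hosts, hostname string) string {
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
		return hosts
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, entry)
	}
	return strings.TrimRight(hosts, "\n") + "\n" + entry + "\n"
}

func main() {
	before := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostsEntry(before, "ha-821265-m03"))
}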
	I0422 11:09:15.924469   27717 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18711-7633/.minikube CaCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18711-7633/.minikube}
	I0422 11:09:15.924485   27717 buildroot.go:174] setting up certificates
	I0422 11:09:15.924498   27717 provision.go:84] configureAuth start
	I0422 11:09:15.924511   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetMachineName
	I0422 11:09:15.924793   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetIP
	I0422 11:09:15.927506   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.927908   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:15.927936   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.928122   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:15.930093   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.930413   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:15.930445   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:15.930577   27717 provision.go:143] copyHostCerts
	I0422 11:09:15.930610   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:09:15.930646   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem, removing ...
	I0422 11:09:15.930660   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:09:15.930739   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem (1078 bytes)
	I0422 11:09:15.930810   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:09:15.930827   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem, removing ...
	I0422 11:09:15.930832   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:09:15.930860   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem (1123 bytes)
	I0422 11:09:15.930908   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:09:15.930923   27717 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem, removing ...
	I0422 11:09:15.930927   27717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:09:15.930946   27717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem (1679 bytes)
	I0422 11:09:15.930990   27717 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem org=jenkins.ha-821265-m03 san=[127.0.0.1 192.168.39.95 ha-821265-m03 localhost minikube]
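Note: the server certificate generated above is signed by the shared minikube CA and carries the node's IPs and hostnames as subject alternative names. A compact sketch of issuing such a cert with crypto/x509 — it creates a throwaway CA instead of loading ca.pem/ca-key.pem, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with SANs like those in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-821265-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-821265-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.95")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("issued server cert, %d bytes of DER\n", len(srvDER))
}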
	I0422 11:09:16.024553   27717 provision.go:177] copyRemoteCerts
	I0422 11:09:16.024614   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 11:09:16.024637   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:16.027483   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.027829   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.027853   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.028049   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:16.028237   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:16.028411   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:16.028605   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:09:16.112900   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 11:09:16.112967   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 11:09:16.143585   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 11:09:16.143658   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0422 11:09:16.169632   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 11:09:16.169713   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 11:09:16.199360   27717 provision.go:87] duration metric: took 274.848144ms to configureAuth
	I0422 11:09:16.199393   27717 buildroot.go:189] setting minikube options for container-runtime
	I0422 11:09:16.199624   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:09:16.199728   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:16.202554   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.202901   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.202935   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.203218   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:16.203402   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:16.203558   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:16.203662   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:16.203823   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:09:16.204094   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0422 11:09:16.204122   27717 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 11:09:16.496129   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 11:09:16.496171   27717 main.go:141] libmachine: Checking connection to Docker...
	I0422 11:09:16.496181   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetURL
	I0422 11:09:16.497655   27717 main.go:141] libmachine: (ha-821265-m03) DBG | Using libvirt version 6000000
	I0422 11:09:16.499978   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.500425   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.500456   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.500670   27717 main.go:141] libmachine: Docker is up and running!
	I0422 11:09:16.500684   27717 main.go:141] libmachine: Reticulating splines...
	I0422 11:09:16.500690   27717 client.go:171] duration metric: took 24.827536517s to LocalClient.Create
	I0422 11:09:16.500712   27717 start.go:167] duration metric: took 24.827594634s to libmachine.API.Create "ha-821265"
	I0422 11:09:16.500725   27717 start.go:293] postStartSetup for "ha-821265-m03" (driver="kvm2")
	I0422 11:09:16.500738   27717 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 11:09:16.500760   27717 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:09:16.501066   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 11:09:16.501094   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:16.503847   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.504238   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.504279   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.504471   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:16.504698   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:16.504899   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:16.505051   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:09:16.589038   27717 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 11:09:16.593840   27717 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 11:09:16.593868   27717 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/addons for local assets ...
	I0422 11:09:16.593932   27717 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/files for local assets ...
	I0422 11:09:16.593999   27717 filesync.go:149] local asset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> 149452.pem in /etc/ssl/certs
	I0422 11:09:16.594008   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /etc/ssl/certs/149452.pem
	I0422 11:09:16.594086   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 11:09:16.605530   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:09:16.632347   27717 start.go:296] duration metric: took 131.607684ms for postStartSetup
	I0422 11:09:16.632401   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetConfigRaw
	I0422 11:09:16.632992   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetIP
	I0422 11:09:16.635433   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.635726   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.635756   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.635999   27717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:09:16.636182   27717 start.go:128] duration metric: took 24.981299957s to createHost
	I0422 11:09:16.636205   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:16.638145   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.638480   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.638507   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.638656   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:16.638818   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:16.638955   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:16.639046   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:16.639183   27717 main.go:141] libmachine: Using SSH client type: native
	I0422 11:09:16.639429   27717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0422 11:09:16.639445   27717 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 11:09:16.742163   27717 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713784156.713110697
	
	I0422 11:09:16.742189   27717 fix.go:216] guest clock: 1713784156.713110697
	I0422 11:09:16.742200   27717 fix.go:229] Guest: 2024-04-22 11:09:16.713110697 +0000 UTC Remote: 2024-04-22 11:09:16.636195909 +0000 UTC m=+159.762522555 (delta=76.914788ms)
	I0422 11:09:16.742222   27717 fix.go:200] guest clock delta is within tolerance: 76.914788ms
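Note: the guest/host comparison above only triggers a clock resync when the delta exceeds a tolerance; here the ~77ms drift passes. A trivial sketch of that check — the one-second threshold is an assumption, not the tolerance minikube actually configures:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether guest and host clocks are close enough to skip a resync.
func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(76 * time.Millisecond) // roughly the delta reported above
	if delta, ok := withinTolerance(guest, host, time.Second); ok {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}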
	I0422 11:09:16.742230   27717 start.go:83] releasing machines lock for "ha-821265-m03", held for 25.087482422s
	I0422 11:09:16.742258   27717 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:09:16.742561   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetIP
	I0422 11:09:16.745430   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.745764   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.745794   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.748266   27717 out.go:177] * Found network options:
	I0422 11:09:16.749634   27717 out.go:177]   - NO_PROXY=192.168.39.150,192.168.39.39
	W0422 11:09:16.750980   27717 proxy.go:119] fail to check proxy env: Error ip not in block
	W0422 11:09:16.751009   27717 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 11:09:16.751029   27717 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:09:16.751641   27717 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:09:16.751874   27717 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:09:16.751979   27717 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 11:09:16.752018   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	W0422 11:09:16.752104   27717 proxy.go:119] fail to check proxy env: Error ip not in block
	W0422 11:09:16.752131   27717 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 11:09:16.752212   27717 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 11:09:16.752236   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:09:16.754931   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.755141   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.755354   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.755388   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.755520   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:16.755547   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:16.755793   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:16.755898   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:09:16.755971   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:16.756069   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:09:16.756130   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:16.756188   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:09:16.756370   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:09:16.756383   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:09:16.995055   27717 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 11:09:17.003159   27717 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 11:09:17.003265   27717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 11:09:17.022246   27717 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 11:09:17.022274   27717 start.go:494] detecting cgroup driver to use...
	I0422 11:09:17.022344   27717 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 11:09:17.039766   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 11:09:17.055183   27717 docker.go:217] disabling cri-docker service (if available) ...
	I0422 11:09:17.055249   27717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 11:09:17.071071   27717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 11:09:17.086203   27717 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 11:09:17.212333   27717 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 11:09:17.401337   27717 docker.go:233] disabling docker service ...
	I0422 11:09:17.401418   27717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 11:09:17.421314   27717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 11:09:17.438204   27717 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 11:09:17.565481   27717 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 11:09:17.701482   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 11:09:17.719346   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 11:09:17.742002   27717 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 11:09:17.742069   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:09:17.754885   27717 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 11:09:17.754944   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:09:17.769590   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:09:17.784142   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:09:17.796657   27717 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 11:09:17.811165   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:09:17.827414   27717 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:09:17.849119   27717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
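Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pointing pause_image at registry.k8s.io/pause:3.9, setting cgroup_manager = "cgroupfs" with conmon_cgroup = "pod", and adding "net.ipv4.ip_unprivileged_port_start=0" to a default_sysctls list. The resulting drop-in itself is not captured in this log, so treat that as a reconstruction from the commands shown.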
	I0422 11:09:17.862638   27717 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 11:09:17.874610   27717 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 11:09:17.874676   27717 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 11:09:17.891831   27717 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 11:09:17.904059   27717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:09:18.028167   27717 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 11:09:18.190198   27717 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 11:09:18.190273   27717 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 11:09:18.196461   27717 start.go:562] Will wait 60s for crictl version
	I0422 11:09:18.196533   27717 ssh_runner.go:195] Run: which crictl
	I0422 11:09:18.200973   27717 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 11:09:18.241976   27717 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 11:09:18.242058   27717 ssh_runner.go:195] Run: crio --version
	I0422 11:09:18.276722   27717 ssh_runner.go:195] Run: crio --version
	I0422 11:09:18.312736   27717 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 11:09:18.314312   27717 out.go:177]   - env NO_PROXY=192.168.39.150
	I0422 11:09:18.315777   27717 out.go:177]   - env NO_PROXY=192.168.39.150,192.168.39.39
	I0422 11:09:18.317079   27717 main.go:141] libmachine: (ha-821265-m03) Calling .GetIP
	I0422 11:09:18.319814   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:18.320279   27717 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:09:18.320306   27717 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:09:18.320528   27717 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 11:09:18.325438   27717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 11:09:18.340224   27717 mustload.go:65] Loading cluster: ha-821265
	I0422 11:09:18.340479   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:09:18.340720   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:09:18.340792   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:09:18.355733   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40399
	I0422 11:09:18.356170   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:09:18.356643   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:09:18.356659   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:09:18.356957   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:09:18.357205   27717 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:09:18.359041   27717 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:09:18.359383   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:09:18.359422   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:09:18.374945   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43221
	I0422 11:09:18.375355   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:09:18.375881   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:09:18.375907   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:09:18.376247   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:09:18.376465   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:09:18.376621   27717 certs.go:68] Setting up /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265 for IP: 192.168.39.95
	I0422 11:09:18.376642   27717 certs.go:194] generating shared ca certs ...
	I0422 11:09:18.376662   27717 certs.go:226] acquiring lock for ca certs: {Name:mk0b77082b88c771d0b00be5267ca31dfee6f85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:09:18.376828   27717 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key
	I0422 11:09:18.376887   27717 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key
	I0422 11:09:18.376899   27717 certs.go:256] generating profile certs ...
	I0422 11:09:18.376967   27717 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key
	I0422 11:09:18.376994   27717 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.31f49dce
	I0422 11:09:18.377008   27717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.31f49dce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150 192.168.39.39 192.168.39.95 192.168.39.254]
	I0422 11:09:18.586174   27717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.31f49dce ...
	I0422 11:09:18.586202   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.31f49dce: {Name:mk0abe473282f1560348550eacbe3ea6fdc28112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:09:18.586359   27717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.31f49dce ...
	I0422 11:09:18.586372   27717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.31f49dce: {Name:mka3b0906da84245b52f3e9ec6c525d09b33b6e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:09:18.586445   27717 certs.go:381] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.31f49dce -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt
	I0422 11:09:18.586567   27717 certs.go:385] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.31f49dce -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key
	I0422 11:09:18.586683   27717 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key
	I0422 11:09:18.586698   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 11:09:18.586710   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 11:09:18.586723   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 11:09:18.586736   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 11:09:18.586748   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 11:09:18.586760   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 11:09:18.586772   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 11:09:18.586784   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 11:09:18.586843   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem (1338 bytes)
	W0422 11:09:18.586868   27717 certs.go:480] ignoring /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945_empty.pem, impossibly tiny 0 bytes
	I0422 11:09:18.586877   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem (1679 bytes)
	I0422 11:09:18.586898   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem (1078 bytes)
	I0422 11:09:18.586918   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem (1123 bytes)
	I0422 11:09:18.586937   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem (1679 bytes)
	I0422 11:09:18.586971   27717 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:09:18.586995   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:09:18.587012   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem -> /usr/share/ca-certificates/14945.pem
	I0422 11:09:18.587023   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /usr/share/ca-certificates/149452.pem
	I0422 11:09:18.587052   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:09:18.589945   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:09:18.590343   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:09:18.590371   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:09:18.590552   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:09:18.590726   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:09:18.590870   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:09:18.591045   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:09:18.665233   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0422 11:09:18.671728   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0422 11:09:18.685400   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0422 11:09:18.690495   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0422 11:09:18.704853   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0422 11:09:18.710158   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0422 11:09:18.723765   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0422 11:09:18.729225   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0422 11:09:18.743212   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0422 11:09:18.748424   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0422 11:09:18.763219   27717 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0422 11:09:18.770678   27717 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0422 11:09:18.786190   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 11:09:18.819799   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 11:09:18.852375   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 11:09:18.882081   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 11:09:18.911140   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0422 11:09:18.939772   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 11:09:18.968018   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 11:09:18.997132   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 11:09:19.026284   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 11:09:19.054169   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem --> /usr/share/ca-certificates/14945.pem (1338 bytes)
	I0422 11:09:19.086736   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /usr/share/ca-certificates/149452.pem (1708 bytes)
	I0422 11:09:19.118084   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0422 11:09:19.139533   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0422 11:09:19.159259   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0422 11:09:19.178452   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0422 11:09:19.199458   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0422 11:09:19.221696   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0422 11:09:19.241488   27717 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0422 11:09:19.261496   27717 ssh_runner.go:195] Run: openssl version
	I0422 11:09:19.268162   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14945.pem && ln -fs /usr/share/ca-certificates/14945.pem /etc/ssl/certs/14945.pem"
	I0422 11:09:19.280231   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14945.pem
	I0422 11:09:19.285564   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 10:51 /usr/share/ca-certificates/14945.pem
	I0422 11:09:19.285614   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14945.pem
	I0422 11:09:19.292158   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14945.pem /etc/ssl/certs/51391683.0"
	I0422 11:09:19.304177   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149452.pem && ln -fs /usr/share/ca-certificates/149452.pem /etc/ssl/certs/149452.pem"
	I0422 11:09:19.317794   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149452.pem
	I0422 11:09:19.323152   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 10:51 /usr/share/ca-certificates/149452.pem
	I0422 11:09:19.323208   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149452.pem
	I0422 11:09:19.330146   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149452.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 11:09:19.342884   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 11:09:19.356685   27717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:09:19.362191   27717 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:09:19.362241   27717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:09:19.368802   27717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 11:09:19.381512   27717 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 11:09:19.386404   27717 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 11:09:19.386454   27717 kubeadm.go:928] updating node {m03 192.168.39.95 8443 v1.30.0 crio true true} ...
	I0422 11:09:19.386529   27717 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-821265-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 11:09:19.386550   27717 kube-vip.go:111] generating kube-vip config ...
	I0422 11:09:19.386599   27717 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0422 11:09:19.406645   27717 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0422 11:09:19.406727   27717 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
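
The manifest above is the static pod through which kube-vip holds the HA virtual IP 192.168.39.254 on port 8443 for this cluster. As a minimal sketch of rendering a similar manifest from a handful of parameters (the struct, field set, and template here are illustrative only, not minikube's actual kube-vip.go template):

    package main

    import (
        "os"
        "text/template"
    )

    // vipParams is a hypothetical parameter set; minikube's real config carries more fields.
    type vipParams struct {
        Address   string
        Port      int
        Interface string
        Image     string
    }

    const manifest = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{ .Image }}
        args: ["manager"]
        env:
        - name: vip_interface
          value: {{ .Interface }}
        - name: port
          value: "{{ .Port }}"
        - name: address
          value: {{ .Address }}
        - name: cp_enable
          value: "true"
      hostNetwork: true
    `

    func main() {
        // Render the manifest to stdout with the values seen in the log above.
        t := template.Must(template.New("kube-vip").Parse(manifest))
        p := vipParams{
            Address:   "192.168.39.254",
            Port:      8443,
            Interface: "eth0",
            Image:     "ghcr.io/kube-vip/kube-vip:v0.7.1",
        }
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }
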
	I0422 11:09:19.406814   27717 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 11:09:19.419228   27717 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0422 11:09:19.419300   27717 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0422 11:09:19.431887   27717 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0422 11:09:19.431916   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0422 11:09:19.431916   27717 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0422 11:09:19.431935   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0422 11:09:19.431887   27717 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0422 11:09:19.431991   27717 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0422 11:09:19.431992   27717 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0422 11:09:19.432012   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:09:19.448974   27717 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0422 11:09:19.449015   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0422 11:09:19.449027   27717 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0422 11:09:19.449059   27717 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0422 11:09:19.449080   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0422 11:09:19.449116   27717 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0422 11:09:19.486783   27717 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0422 11:09:19.486824   27717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
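
The three transfers above are driven by the checksum URLs logged by binary.go: each binary published at dl.k8s.io is paired with a .sha256 file. A rough sketch of a checksum-verified download, assuming the public dl.k8s.io layout (this is not minikube's own cache logic):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // download fetches url into path and returns the SHA-256 of the bytes written.
    func download(url, path string) (string, error) {
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        out, err := os.Create(path)
        if err != nil {
            return "", err
        }
        defer out.Close()
        h := sha256.New()
        // Write to the file and the hash in one pass.
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return "", err
        }
        return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm"
        got, err := download(base, "kubeadm")
        if err != nil {
            panic(err)
        }
        // The published .sha256 file holds the hex digest of the binary.
        resp, err := http.Get(base + ".sha256")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        want, err := io.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        if !strings.HasPrefix(strings.TrimSpace(string(want)), got) {
            panic(fmt.Sprintf("checksum mismatch for kubeadm: got %s", got))
        }
        fmt.Println("kubeadm verified:", got)
    }
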
	I0422 11:09:20.595730   27717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0422 11:09:20.606456   27717 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0422 11:09:20.626239   27717 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 11:09:20.645685   27717 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0422 11:09:20.665373   27717 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0422 11:09:20.670422   27717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 11:09:20.685691   27717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:09:20.825824   27717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 11:09:20.845457   27717 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:09:20.845784   27717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:09:20.845821   27717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:09:20.861313   27717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44447
	I0422 11:09:20.862223   27717 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:09:20.862765   27717 main.go:141] libmachine: Using API Version  1
	I0422 11:09:20.862789   27717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:09:20.863111   27717 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:09:20.863326   27717 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:09:20.863507   27717 start.go:316] joinCluster: &{Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:defau
lt APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:fal
se istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:09:20.863617   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0422 11:09:20.863636   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:09:20.867189   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:09:20.867773   27717 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:09:20.867802   27717 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:09:20.868010   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:09:20.868195   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:09:20.868409   27717 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:09:20.868571   27717 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:09:21.044309   27717 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 11:09:21.044362   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8cpnhy.fsuqlvdl5mdoaw2l --discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-821265-m03 --control-plane --apiserver-advertise-address=192.168.39.95 --apiserver-bind-port=8443"
	I0422 11:09:45.816130   27717 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 8cpnhy.fsuqlvdl5mdoaw2l --discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-821265-m03 --control-plane --apiserver-advertise-address=192.168.39.95 --apiserver-bind-port=8443": (24.7717447s)
	I0422 11:09:45.816167   27717 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0422 11:09:46.534130   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-821265-m03 minikube.k8s.io/updated_at=2024_04_22T11_09_46_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437 minikube.k8s.io/name=ha-821265 minikube.k8s.io/primary=false
	I0422 11:09:46.650185   27717 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-821265-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0422 11:09:46.782667   27717 start.go:318] duration metric: took 25.91915592s to joinCluster
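
The join above is a two-step flow: the primary control plane prints a join command via `kubeadm token create --print-join-command --ttl=0`, and the new node runs it with the extra control-plane flags. A small illustrative sketch of composing that command locally (printed rather than executed; the advertise address is taken from the log above, and the test performs all of this over SSH against the binaries under /var/lib/minikube/binaries):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Ask kubeadm on an existing control plane for a join command.
        out, err := exec.Command("kubeadm", "token", "create",
            "--print-join-command", "--ttl=0").Output()
        if err != nil {
            panic(err)
        }
        join := strings.TrimSpace(string(out))
        // For an additional control-plane node the printed command is extended
        // with --control-plane and an advertise address, as in the log above.
        join += " --control-plane --apiserver-advertise-address=192.168.39.95 --apiserver-bind-port=8443"
        fmt.Println(join)
    }
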
	I0422 11:09:46.782754   27717 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 11:09:46.784737   27717 out.go:177] * Verifying Kubernetes components...
	I0422 11:09:46.783108   27717 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:09:46.786691   27717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:09:47.065586   27717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 11:09:47.126472   27717 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 11:09:47.126805   27717 kapi.go:59] client config for ha-821265: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.crt", KeyFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key", CAFile:"/home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0422 11:09:47.126904   27717 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.150:8443
	I0422 11:09:47.127221   27717 node_ready.go:35] waiting up to 6m0s for node "ha-821265-m03" to be "Ready" ...
	I0422 11:09:47.127305   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:47.127316   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:47.127326   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:47.127335   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:47.139056   27717 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0422 11:09:47.628225   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:47.628244   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:47.628252   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:47.628256   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:47.632295   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:09:48.128444   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:48.128473   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:48.128486   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:48.128493   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:48.132396   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:48.627458   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:48.627483   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:48.627495   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:48.627500   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:48.631537   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:09:49.128100   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:49.128123   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:49.128131   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:49.128135   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:49.132070   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:49.132722   27717 node_ready.go:53] node "ha-821265-m03" has status "Ready":"False"
	I0422 11:09:49.627810   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:49.627836   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:49.627846   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:49.627851   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:49.631389   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:50.127408   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:50.127555   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:50.127579   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:50.127588   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:50.131608   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:50.627496   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:50.627518   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:50.627526   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:50.627530   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:50.633238   27717 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 11:09:51.127897   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:51.127925   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:51.127936   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:51.127942   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:51.131758   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:51.627972   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:51.627992   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:51.627999   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:51.628003   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:51.631053   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:51.631812   27717 node_ready.go:53] node "ha-821265-m03" has status "Ready":"False"
	I0422 11:09:52.128060   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:52.128080   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:52.128088   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:52.128091   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:52.132044   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:52.628377   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:52.628400   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:52.628408   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:52.628412   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:52.632264   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:53.128388   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:53.128407   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:53.128416   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:53.128421   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:53.133077   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:09:53.628230   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:53.628254   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:53.628264   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:53.628269   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:53.631791   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:53.633300   27717 node_ready.go:53] node "ha-821265-m03" has status "Ready":"False"
	I0422 11:09:54.128058   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:54.128086   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:54.128094   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:54.128099   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:54.131943   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:54.627964   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:54.627985   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:54.627994   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:54.627998   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:54.631842   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:55.127908   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:55.127929   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.127936   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.127939   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.132024   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:09:55.133067   27717 node_ready.go:49] node "ha-821265-m03" has status "Ready":"True"
	I0422 11:09:55.133091   27717 node_ready.go:38] duration metric: took 8.005847302s for node "ha-821265-m03" to be "Ready" ...
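
The node_ready.go loop above is a plain poll of GET /api/v1/nodes/<name> every ~500ms until the node's Ready condition turns True. An equivalent sketch using client-go, with a placeholder kubeconfig path (the test uses the profile's kubeconfig under .minikube):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path for illustration.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodeName := "ha-821265-m03"
        // Poll every 500ms for up to 6 minutes, matching the cadence above.
        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.Background(), nodeName, metav1.GetOptions{})
            if err != nil {
                return false, nil // tolerate transient errors and keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        if err != nil {
            panic(err)
        }
        fmt.Printf("node %q is Ready\n", nodeName)
    }
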
	I0422 11:09:55.133102   27717 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 11:09:55.133179   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:09:55.133192   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.133203   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.133224   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.141303   27717 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0422 11:09:55.148059   27717 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ft2jl" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.148131   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ft2jl
	I0422 11:09:55.148143   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.148150   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.148154   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.151339   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:55.152110   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:09:55.152125   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.152132   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.152135   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.155067   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:09:55.155832   27717 pod_ready.go:92] pod "coredns-7db6d8ff4d-ft2jl" in "kube-system" namespace has status "Ready":"True"
	I0422 11:09:55.155847   27717 pod_ready.go:81] duration metric: took 7.763906ms for pod "coredns-7db6d8ff4d-ft2jl" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.155855   27717 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ht7jl" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.155897   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ht7jl
	I0422 11:09:55.155907   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.155914   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.155917   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.158817   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:09:55.159579   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:09:55.159591   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.159597   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.159601   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.162388   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:09:55.162982   27717 pod_ready.go:92] pod "coredns-7db6d8ff4d-ht7jl" in "kube-system" namespace has status "Ready":"True"
	I0422 11:09:55.163003   27717 pod_ready.go:81] duration metric: took 7.140664ms for pod "coredns-7db6d8ff4d-ht7jl" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.163015   27717 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.163078   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265
	I0422 11:09:55.163089   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.163096   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.163101   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.166984   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:55.167590   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:09:55.167603   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.167616   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.167621   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.170261   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:09:55.170809   27717 pod_ready.go:92] pod "etcd-ha-821265" in "kube-system" namespace has status "Ready":"True"
	I0422 11:09:55.170824   27717 pod_ready.go:81] duration metric: took 7.801021ms for pod "etcd-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.170831   27717 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.170881   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m02
	I0422 11:09:55.170890   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.170897   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.170900   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.173959   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:55.174967   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:09:55.175012   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.175031   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.175039   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.179210   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:09:55.180554   27717 pod_ready.go:92] pod "etcd-ha-821265-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 11:09:55.180569   27717 pod_ready.go:81] duration metric: took 9.73166ms for pod "etcd-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.180577   27717 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-821265-m03" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:55.328927   27717 request.go:629] Waited for 148.302005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:55.328983   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:55.328988   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.328996   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.329002   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.332451   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:55.528632   27717 request.go:629] Waited for 195.409677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:55.528695   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:55.528707   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.528718   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.528726   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.535731   27717 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
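
The "Waited for ... due to client-side throttling" lines come from client-go's default client-side rate limiter (roughly 5 QPS with a burst of 10 when rest.Config leaves QPS and Burst unset), not from API-server priority and fairness. A sketch of raising those limits on a client, again with a placeholder kubeconfig path:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        // With QPS and Burst left at zero, client-go falls back to small
        // defaults, which is what produces the throttling waits above.
        cfg.QPS = 50
        cfg.Burst = 100
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("client ready:", cs != nil)
    }
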
	I0422 11:09:55.728896   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:55.728919   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.728928   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.728934   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.732722   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:55.928396   27717 request.go:629] Waited for 194.410291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:55.928472   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:55.928479   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:55.928490   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:55.928503   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:55.931979   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:56.181777   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:56.181799   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:56.181830   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:56.181839   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:56.185731   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:56.327972   27717 request.go:629] Waited for 141.220617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:56.328022   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:56.328028   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:56.328035   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:56.328042   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:56.332996   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:09:56.681455   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:56.681479   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:56.681487   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:56.681491   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:56.685257   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:56.728520   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:56.728549   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:56.728561   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:56.728569   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:56.745726   27717 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0422 11:09:57.181343   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:57.181366   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:57.181374   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:57.181378   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:57.184555   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:57.185663   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:57.185682   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:57.185688   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:57.185692   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:57.188717   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:57.189405   27717 pod_ready.go:102] pod "etcd-ha-821265-m03" in "kube-system" namespace has status "Ready":"False"
	I0422 11:09:57.681048   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:57.681074   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:57.681085   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:57.681097   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:57.684566   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:57.685854   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:57.685870   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:57.685877   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:57.685882   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:57.689062   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:58.180918   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:58.180952   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:58.180963   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:58.180970   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:58.183926   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:09:58.184547   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:58.184561   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:58.184572   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:58.184581   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:58.187509   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:09:58.680994   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:58.681017   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:58.681024   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:58.681029   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:58.685012   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:58.685695   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:58.685714   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:58.685725   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:58.685730   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:58.688908   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:59.180741   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/etcd-ha-821265-m03
	I0422 11:09:59.180762   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:59.180787   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:59.180792   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:59.184823   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:09:59.185665   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:59.185685   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:59.185696   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:59.185701   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:59.188861   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:59.189395   27717 pod_ready.go:92] pod "etcd-ha-821265-m03" in "kube-system" namespace has status "Ready":"True"
	I0422 11:09:59.189409   27717 pod_ready.go:81] duration metric: took 4.008826567s for pod "etcd-ha-821265-m03" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:59.189427   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:59.189478   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265
	I0422 11:09:59.189487   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:59.189494   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:59.189497   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:59.192943   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:59.193737   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:09:59.193754   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:59.193765   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:59.193775   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:59.196429   27717 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 11:09:59.197417   27717 pod_ready.go:92] pod "kube-apiserver-ha-821265" in "kube-system" namespace has status "Ready":"True"
	I0422 11:09:59.197432   27717 pod_ready.go:81] duration metric: took 7.996435ms for pod "kube-apiserver-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:59.197440   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:59.328829   27717 request.go:629] Waited for 131.304831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:09:59.328882   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m02
	I0422 11:09:59.328887   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:59.328894   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:59.328899   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:59.332794   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:59.528022   27717 request.go:629] Waited for 194.201432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:09:59.528098   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:09:59.528106   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:59.528115   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:59.528125   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:59.532398   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:09:59.533198   27717 pod_ready.go:92] pod "kube-apiserver-ha-821265-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 11:09:59.533216   27717 pod_ready.go:81] duration metric: took 335.771232ms for pod "kube-apiserver-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:59.533225   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-821265-m03" in "kube-system" namespace to be "Ready" ...
	I0422 11:09:59.728352   27717 request.go:629] Waited for 195.060151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m03
	I0422 11:09:59.728408   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-821265-m03
	I0422 11:09:59.728413   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:59.728420   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:59.728425   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:59.732114   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:59.928310   27717 request.go:629] Waited for 195.371996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:59.928363   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:09:59.928368   27717 round_trippers.go:469] Request Headers:
	I0422 11:09:59.928375   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:09:59.928381   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:09:59.931934   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:09:59.932490   27717 pod_ready.go:92] pod "kube-apiserver-ha-821265-m03" in "kube-system" namespace has status "Ready":"True"
	I0422 11:09:59.932510   27717 pod_ready.go:81] duration metric: took 399.279596ms for pod "kube-apiserver-ha-821265-m03" in "kube-system" namespace to be "Ready" ...
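
Each pod_ready.go check above boils down to reading the pod's Ready condition from its status. A minimal helper in that spirit (not minikube's own code) would be:

    package readiness

    import corev1 "k8s.io/api/core/v1"

    // PodReady reports whether the pod's Ready condition is True, which is the
    // `has status "Ready":"True"` outcome logged above.
    func PodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
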
	I0422 11:09:59.932520   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:00.128683   27717 request.go:629] Waited for 196.072405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265
	I0422 11:10:00.128749   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265
	I0422 11:10:00.128756   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:00.128768   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:00.128793   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:00.134879   27717 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0422 11:10:00.328214   27717 request.go:629] Waited for 191.35653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:10:00.328265   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:10:00.328270   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:00.328277   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:00.328281   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:00.332026   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:00.332684   27717 pod_ready.go:92] pod "kube-controller-manager-ha-821265" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:00.332701   27717 pod_ready.go:81] duration metric: took 400.174492ms for pod "kube-controller-manager-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:00.332713   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:00.528856   27717 request.go:629] Waited for 196.071774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m02
	I0422 11:10:00.528928   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m02
	I0422 11:10:00.528933   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:00.528940   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:00.528945   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:00.533521   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:10:00.728986   27717 request.go:629] Waited for 194.304056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:10:00.729068   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:10:00.729076   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:00.729087   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:00.729094   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:00.732973   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:00.733573   27717 pod_ready.go:92] pod "kube-controller-manager-ha-821265-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:00.733594   27717 pod_ready.go:81] duration metric: took 400.873731ms for pod "kube-controller-manager-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:00.733603   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-821265-m03" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:00.928318   27717 request.go:629] Waited for 194.651614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:00.928378   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:00.928383   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:00.928390   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:00.928395   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:00.932259   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:01.128530   27717 request.go:629] Waited for 195.398787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:01.128618   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:01.128629   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:01.128639   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:01.128651   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:01.132230   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:01.328424   27717 request.go:629] Waited for 94.271832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:01.328528   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:01.328549   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:01.328561   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:01.328572   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:01.332158   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:01.528495   27717 request.go:629] Waited for 195.503425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:01.528548   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:01.528554   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:01.528565   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:01.528571   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:01.538438   27717 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0422 11:10:01.734068   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:01.734092   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:01.734104   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:01.734109   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:01.738513   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:10:01.928571   27717 request.go:629] Waited for 189.028209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:01.928645   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:01.928672   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:01.928680   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:01.928687   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:01.932382   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:02.234774   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:02.234802   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:02.234814   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:02.234821   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:02.240337   27717 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 11:10:02.328656   27717 request.go:629] Waited for 87.297571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:02.328727   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:02.328734   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:02.328758   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:02.328788   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:02.332414   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:02.734739   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:02.734760   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:02.734774   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:02.734787   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:02.738826   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:10:02.739835   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:02.739853   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:02.739860   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:02.739863   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:02.743047   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:02.743632   27717 pod_ready.go:102] pod "kube-controller-manager-ha-821265-m03" in "kube-system" namespace has status "Ready":"False"
	I0422 11:10:03.233882   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:03.233910   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:03.233919   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:03.233923   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:03.238095   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:10:03.239086   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:03.239107   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:03.239119   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:03.239124   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:03.242697   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:03.734022   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:03.734043   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:03.734048   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:03.734052   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:03.738415   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:10:03.739174   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:03.739188   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:03.739195   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:03.739200   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:03.742528   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:04.234024   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-821265-m03
	I0422 11:10:04.234045   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:04.234053   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:04.234058   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:04.238064   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:04.238672   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:04.238692   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:04.238701   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:04.238708   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:04.241801   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:04.242490   27717 pod_ready.go:92] pod "kube-controller-manager-ha-821265-m03" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:04.242517   27717 pod_ready.go:81] duration metric: took 3.508907065s for pod "kube-controller-manager-ha-821265-m03" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:04.242530   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j2hpk" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:04.242597   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j2hpk
	I0422 11:10:04.242609   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:04.242618   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:04.242623   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:04.245689   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:04.328738   27717 request.go:629] Waited for 82.253896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:10:04.328861   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:10:04.328872   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:04.328879   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:04.328884   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:04.332660   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:04.333455   27717 pod_ready.go:92] pod "kube-proxy-j2hpk" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:04.333477   27717 pod_ready.go:81] duration metric: took 90.940541ms for pod "kube-proxy-j2hpk" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:04.333486   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lmhp7" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:04.528909   27717 request.go:629] Waited for 195.350003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lmhp7
	I0422 11:10:04.528960   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lmhp7
	I0422 11:10:04.528965   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:04.528972   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:04.528977   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:04.533521   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:10:04.728595   27717 request.go:629] Waited for 194.421308ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:04.728664   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:04.728672   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:04.728683   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:04.728688   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:04.732667   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:04.733609   27717 pod_ready.go:92] pod "kube-proxy-lmhp7" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:04.733631   27717 pod_ready.go:81] duration metric: took 400.138637ms for pod "kube-proxy-lmhp7" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:04.733641   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w7r9d" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:04.928836   27717 request.go:629] Waited for 195.095072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w7r9d
	I0422 11:10:04.928909   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w7r9d
	I0422 11:10:04.928920   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:04.928935   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:04.928943   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:04.933134   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:10:05.128343   27717 request.go:629] Waited for 194.398682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:10:05.128436   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:10:05.128443   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:05.128450   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:05.128457   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:05.132814   27717 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 11:10:05.133630   27717 pod_ready.go:92] pod "kube-proxy-w7r9d" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:05.133649   27717 pod_ready.go:81] duration metric: took 400.001653ms for pod "kube-proxy-w7r9d" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:05.133658   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:05.328876   27717 request.go:629] Waited for 195.125957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265
	I0422 11:10:05.328943   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265
	I0422 11:10:05.328951   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:05.328962   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:05.328971   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:05.332942   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:05.528399   27717 request.go:629] Waited for 194.35396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:10:05.528491   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265
	I0422 11:10:05.528502   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:05.528509   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:05.528515   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:05.532124   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:05.532922   27717 pod_ready.go:92] pod "kube-scheduler-ha-821265" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:05.532944   27717 pod_ready.go:81] duration metric: took 399.278603ms for pod "kube-scheduler-ha-821265" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:05.532956   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:05.727969   27717 request.go:629] Waited for 194.954055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265-m02
	I0422 11:10:05.728066   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265-m02
	I0422 11:10:05.728078   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:05.728089   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:05.728100   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:05.731528   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:05.928856   27717 request.go:629] Waited for 196.426732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:10:05.928913   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m02
	I0422 11:10:05.928918   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:05.928925   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:05.928929   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:05.932832   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:05.933393   27717 pod_ready.go:92] pod "kube-scheduler-ha-821265-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:05.933410   27717 pod_ready.go:81] duration metric: took 400.447952ms for pod "kube-scheduler-ha-821265-m02" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:05.933419   27717 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-821265-m03" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:06.128597   27717 request.go:629] Waited for 195.116076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265-m03
	I0422 11:10:06.128669   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-821265-m03
	I0422 11:10:06.128674   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:06.128681   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:06.128689   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:06.134971   27717 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0422 11:10:06.328097   27717 request.go:629] Waited for 192.2814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:06.328160   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes/ha-821265-m03
	I0422 11:10:06.328165   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:06.328173   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:06.328178   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:06.331467   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:06.332150   27717 pod_ready.go:92] pod "kube-scheduler-ha-821265-m03" in "kube-system" namespace has status "Ready":"True"
	I0422 11:10:06.332169   27717 pod_ready.go:81] duration metric: took 398.74421ms for pod "kube-scheduler-ha-821265-m03" in "kube-system" namespace to be "Ready" ...
	I0422 11:10:06.332181   27717 pod_ready.go:38] duration metric: took 11.199068135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 11:10:06.332193   27717 api_server.go:52] waiting for apiserver process to appear ...
	I0422 11:10:06.332242   27717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:10:06.350635   27717 api_server.go:72] duration metric: took 19.567842113s to wait for apiserver process to appear ...
	I0422 11:10:06.350664   27717 api_server.go:88] waiting for apiserver healthz status ...
	I0422 11:10:06.350685   27717 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0422 11:10:06.356504   27717 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I0422 11:10:06.356574   27717 round_trippers.go:463] GET https://192.168.39.150:8443/version
	I0422 11:10:06.356583   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:06.356591   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:06.356600   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:06.357536   27717 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0422 11:10:06.357607   27717 api_server.go:141] control plane version: v1.30.0
	I0422 11:10:06.357625   27717 api_server.go:131] duration metric: took 6.954129ms to wait for apiserver health ...
	I0422 11:10:06.357637   27717 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 11:10:06.528362   27717 request.go:629] Waited for 170.649697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:10:06.528425   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:10:06.528432   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:06.528442   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:06.528453   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:06.556565   27717 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0422 11:10:06.566367   27717 system_pods.go:59] 24 kube-system pods found
	I0422 11:10:06.566402   27717 system_pods.go:61] "coredns-7db6d8ff4d-ft2jl" [09e14815-b8e9-4b60-9b2c-c7d86cccb594] Running
	I0422 11:10:06.566408   27717 system_pods.go:61] "coredns-7db6d8ff4d-ht7jl" [c404a830-ddce-4c49-9e54-05d45871b4b0] Running
	I0422 11:10:06.566412   27717 system_pods.go:61] "etcd-ha-821265" [1a27ab5d-19af-49d9-8eb3-e50b7e2225a5] Running
	I0422 11:10:06.566417   27717 system_pods.go:61] "etcd-ha-821265-m02" [4ba0de26-81d6-423b-a5a4-9fd88c90ebdc] Running
	I0422 11:10:06.566422   27717 system_pods.go:61] "etcd-ha-821265-m03" [43ef0886-3651-4313-847d-ee6cd15ec411] Running
	I0422 11:10:06.566427   27717 system_pods.go:61] "kindnet-d8qgr" [ec965a08-bffa-46ef-8edf-a3f29cb9b474] Running
	I0422 11:10:06.566431   27717 system_pods.go:61] "kindnet-jm2pd" [0550a9db-b106-4ac4-9976-118d80927509] Running
	I0422 11:10:06.566435   27717 system_pods.go:61] "kindnet-qbq9z" [9751a17f-e26b-4ba8-81ce-077103c0aa1c] Running
	I0422 11:10:06.566440   27717 system_pods.go:61] "kube-apiserver-ha-821265" [1e20fb49-c54d-49fd-900b-38e347a52f9a] Running
	I0422 11:10:06.566445   27717 system_pods.go:61] "kube-apiserver-ha-821265-m02" [95616042-7a05-4fc3-a1ef-7fd56c8b3cd8] Running
	I0422 11:10:06.566450   27717 system_pods.go:61] "kube-apiserver-ha-821265-m03" [d2cd8a48-ff79-48cd-9096-99c240d07879] Running
	I0422 11:10:06.566455   27717 system_pods.go:61] "kube-controller-manager-ha-821265" [51933fc1-af7c-4fb0-b811-b6312f4b4d29] Running
	I0422 11:10:06.566460   27717 system_pods.go:61] "kube-controller-manager-ha-821265-m02" [4af2c432-4c7c-4f1f-98da-34af2648d7db] Running
	I0422 11:10:06.566465   27717 system_pods.go:61] "kube-controller-manager-ha-821265-m03" [06ea7b1f-409d-43a6-9493-bc4c24f3f536] Running
	I0422 11:10:06.566471   27717 system_pods.go:61] "kube-proxy-j2hpk" [3ebf4ab0-bc76-4f5c-916e-6b28a81dc031] Running
	I0422 11:10:06.566478   27717 system_pods.go:61] "kube-proxy-lmhp7" [45383871-e744-4764-823a-060a498ebc51] Running
	I0422 11:10:06.566483   27717 system_pods.go:61] "kube-proxy-w7r9d" [56a4f7fc-5ce0-4d77-b30f-9d39cded457c] Running
	I0422 11:10:06.566488   27717 system_pods.go:61] "kube-scheduler-ha-821265" [929e0c00-c49a-4b96-8f6a-7a84ae4f117c] Running
	I0422 11:10:06.566499   27717 system_pods.go:61] "kube-scheduler-ha-821265-m02" [589c30c7-d9df-4745-bdb3-87ae02ab2b67] Running
	I0422 11:10:06.566504   27717 system_pods.go:61] "kube-scheduler-ha-821265-m03" [d57674c8-cc46-4da5-9be1-01675f656b35] Running
	I0422 11:10:06.566511   27717 system_pods.go:61] "kube-vip-ha-821265" [9322f0ee-9e3e-4585-9388-44ccd1417371] Running
	I0422 11:10:06.566516   27717 system_pods.go:61] "kube-vip-ha-821265-m02" [466697de-7dbe-4e6c-be95-9463a9548cde] Running
	I0422 11:10:06.566524   27717 system_pods.go:61] "kube-vip-ha-821265-m03" [a4b446ae-5369-4b1e-bd82-be6fb4110c4c] Running
	I0422 11:10:06.566528   27717 system_pods.go:61] "storage-provisioner" [4b44da93-f3fa-49b7-a701-5ab7a430374f] Running
	I0422 11:10:06.566538   27717 system_pods.go:74] duration metric: took 208.894811ms to wait for pod list to return data ...
	I0422 11:10:06.566555   27717 default_sa.go:34] waiting for default service account to be created ...
	I0422 11:10:06.728319   27717 request.go:629] Waited for 161.692929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/default/serviceaccounts
	I0422 11:10:06.728371   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/default/serviceaccounts
	I0422 11:10:06.728376   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:06.728383   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:06.728387   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:06.731764   27717 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 11:10:06.731867   27717 default_sa.go:45] found service account: "default"
	I0422 11:10:06.731884   27717 default_sa.go:55] duration metric: took 165.321362ms for default service account to be created ...
	I0422 11:10:06.731893   27717 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 11:10:06.928504   27717 request.go:629] Waited for 196.544322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:10:06.928576   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/namespaces/kube-system/pods
	I0422 11:10:06.928582   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:06.928593   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:06.928597   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:06.936268   27717 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0422 11:10:06.943113   27717 system_pods.go:86] 24 kube-system pods found
	I0422 11:10:06.943142   27717 system_pods.go:89] "coredns-7db6d8ff4d-ft2jl" [09e14815-b8e9-4b60-9b2c-c7d86cccb594] Running
	I0422 11:10:06.943148   27717 system_pods.go:89] "coredns-7db6d8ff4d-ht7jl" [c404a830-ddce-4c49-9e54-05d45871b4b0] Running
	I0422 11:10:06.943152   27717 system_pods.go:89] "etcd-ha-821265" [1a27ab5d-19af-49d9-8eb3-e50b7e2225a5] Running
	I0422 11:10:06.943156   27717 system_pods.go:89] "etcd-ha-821265-m02" [4ba0de26-81d6-423b-a5a4-9fd88c90ebdc] Running
	I0422 11:10:06.943160   27717 system_pods.go:89] "etcd-ha-821265-m03" [43ef0886-3651-4313-847d-ee6cd15ec411] Running
	I0422 11:10:06.943164   27717 system_pods.go:89] "kindnet-d8qgr" [ec965a08-bffa-46ef-8edf-a3f29cb9b474] Running
	I0422 11:10:06.943168   27717 system_pods.go:89] "kindnet-jm2pd" [0550a9db-b106-4ac4-9976-118d80927509] Running
	I0422 11:10:06.943172   27717 system_pods.go:89] "kindnet-qbq9z" [9751a17f-e26b-4ba8-81ce-077103c0aa1c] Running
	I0422 11:10:06.943176   27717 system_pods.go:89] "kube-apiserver-ha-821265" [1e20fb49-c54d-49fd-900b-38e347a52f9a] Running
	I0422 11:10:06.943180   27717 system_pods.go:89] "kube-apiserver-ha-821265-m02" [95616042-7a05-4fc3-a1ef-7fd56c8b3cd8] Running
	I0422 11:10:06.943183   27717 system_pods.go:89] "kube-apiserver-ha-821265-m03" [d2cd8a48-ff79-48cd-9096-99c240d07879] Running
	I0422 11:10:06.943187   27717 system_pods.go:89] "kube-controller-manager-ha-821265" [51933fc1-af7c-4fb0-b811-b6312f4b4d29] Running
	I0422 11:10:06.943193   27717 system_pods.go:89] "kube-controller-manager-ha-821265-m02" [4af2c432-4c7c-4f1f-98da-34af2648d7db] Running
	I0422 11:10:06.943200   27717 system_pods.go:89] "kube-controller-manager-ha-821265-m03" [06ea7b1f-409d-43a6-9493-bc4c24f3f536] Running
	I0422 11:10:06.943205   27717 system_pods.go:89] "kube-proxy-j2hpk" [3ebf4ab0-bc76-4f5c-916e-6b28a81dc031] Running
	I0422 11:10:06.943208   27717 system_pods.go:89] "kube-proxy-lmhp7" [45383871-e744-4764-823a-060a498ebc51] Running
	I0422 11:10:06.943212   27717 system_pods.go:89] "kube-proxy-w7r9d" [56a4f7fc-5ce0-4d77-b30f-9d39cded457c] Running
	I0422 11:10:06.943215   27717 system_pods.go:89] "kube-scheduler-ha-821265" [929e0c00-c49a-4b96-8f6a-7a84ae4f117c] Running
	I0422 11:10:06.943219   27717 system_pods.go:89] "kube-scheduler-ha-821265-m02" [589c30c7-d9df-4745-bdb3-87ae02ab2b67] Running
	I0422 11:10:06.943223   27717 system_pods.go:89] "kube-scheduler-ha-821265-m03" [d57674c8-cc46-4da5-9be1-01675f656b35] Running
	I0422 11:10:06.943227   27717 system_pods.go:89] "kube-vip-ha-821265" [9322f0ee-9e3e-4585-9388-44ccd1417371] Running
	I0422 11:10:06.943230   27717 system_pods.go:89] "kube-vip-ha-821265-m02" [466697de-7dbe-4e6c-be95-9463a9548cde] Running
	I0422 11:10:06.943234   27717 system_pods.go:89] "kube-vip-ha-821265-m03" [a4b446ae-5369-4b1e-bd82-be6fb4110c4c] Running
	I0422 11:10:06.943237   27717 system_pods.go:89] "storage-provisioner" [4b44da93-f3fa-49b7-a701-5ab7a430374f] Running
	I0422 11:10:06.943247   27717 system_pods.go:126] duration metric: took 211.344123ms to wait for k8s-apps to be running ...
	I0422 11:10:06.943254   27717 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 11:10:06.943298   27717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:10:06.960136   27717 system_svc.go:56] duration metric: took 16.870275ms WaitForService to wait for kubelet
	I0422 11:10:06.960172   27717 kubeadm.go:576] duration metric: took 20.177382765s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 11:10:06.960195   27717 node_conditions.go:102] verifying NodePressure condition ...
	I0422 11:10:07.128853   27717 request.go:629] Waited for 168.556002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.150:8443/api/v1/nodes
	I0422 11:10:07.128909   27717 round_trippers.go:463] GET https://192.168.39.150:8443/api/v1/nodes
	I0422 11:10:07.128913   27717 round_trippers.go:469] Request Headers:
	I0422 11:10:07.128920   27717 round_trippers.go:473]     Accept: application/json, */*
	I0422 11:10:07.128924   27717 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 11:10:07.134203   27717 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 11:10:07.136104   27717 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 11:10:07.136122   27717 node_conditions.go:123] node cpu capacity is 2
	I0422 11:10:07.136131   27717 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 11:10:07.136135   27717 node_conditions.go:123] node cpu capacity is 2
	I0422 11:10:07.136138   27717 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 11:10:07.136141   27717 node_conditions.go:123] node cpu capacity is 2
	I0422 11:10:07.136145   27717 node_conditions.go:105] duration metric: took 175.945498ms to run NodePressure ...
	I0422 11:10:07.136156   27717 start.go:240] waiting for startup goroutines ...
	I0422 11:10:07.136173   27717 start.go:254] writing updated cluster config ...
	I0422 11:10:07.136460   27717 ssh_runner.go:195] Run: rm -f paused
	I0422 11:10:07.188977   27717 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 11:10:07.191080   27717 out.go:177] * Done! kubectl is now configured to use "ha-821265" cluster and "default" namespace by default
	
	
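	The repeated GETs and the "Waited for ... due to client-side throttling, not priority and fairness" messages above are the readiness poll loop: each kube-system pod is fetched until its PodReady condition reports True, and client-go's default client-side rate limiter spaces the requests out. The following is a minimal, illustrative sketch of that poll-until-Ready pattern with client-go; it assumes a standard kubeconfig, hard-codes one pod name taken from the log, and is not minikube's actual implementation.

	// poll_pod_ready.go
	//
	// Illustrative sketch only: polls a pod until its PodReady condition is True,
	// the same pattern the pod_ready log lines above record. kubeconfigPath, the
	// namespace, and the pod name are placeholders for this example.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config (placeholder path) and build a clientset.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		const ns, pod = "kube-system", "kube-controller-manager-ha-821265-m03"

		// Poll every 500ms, give up after 6 minutes (the log's "waiting up to 6m0s").
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				p, err := client.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient error: keep polling
				}
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %q in %q is Ready\n", pod, ns)
	}

	The throttling delays of roughly 80-200ms seen above come from the clientset's client-side rate limiter (rest.Config QPS/Burst defaults), not from server-side priority and fairness, which is exactly what the request.go messages state.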
	==> CRI-O <==
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.559830998Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6bb9a65-b7aa-47b1-b3d1-b3330aa872ef name=/runtime.v1.RuntimeService/Version
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.561012623Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8755584b-c5ae-4930-bf63-ca385a11a04d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.561454732Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713784477561431632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8755584b-c5ae-4930-bf63-ca385a11a04d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.562161198Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6e8cd79-7714-468b-aa30-1c7032aedc56 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.562216893Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6e8cd79-7714-468b-aa30-1c7032aedc56 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.562470551Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f9e45e23c690bb79c7fd65070b3188b60b1c0041e0955b10386851453d93e8c2,PodSandboxId:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713784211175253270,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391,PodSandboxId:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784060436897502,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139,PodSandboxId:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784060349824691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60306e6c18db97251960e340b26fd7591b71b65493a6e0603cccec3458948a44,PodSandboxId:b2f58af56b111bfad58560278e986cc2852b5ea20e89eb68900084ce537ba0be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1713784060256832706,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68514e3b402ea6cc11c51909fb9a2918a4580e62c5d019c9280d5fd40c8408cf,PodSandboxId:5694d3bdc4521fd36b2ea53baa3bd587487c1067d997f850c02dbe873a1776c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17137840
58198287836,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269,PodSandboxId:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713784057949961038,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ec191f8bcbef49468ef3d9b903de2da840c90478ee97540859b8f37f581f1,PodSandboxId:c0bfe906cafdccf860bd19ca9d4e03e86c477589df6194923b8485f838400aad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713784038374193381,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a9d642b5b95959b9f509e42995bd869,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5,PodSandboxId:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713784035610618080,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652741477fa90fca19fc111b1191a6acd0e2edcee141e389e5fd84f6018ec38e,PodSandboxId:e36b4c8b43c66a7d4a5f4c59ce3a0900d5545b5ef014af353b925a642266dc96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713784035573890703,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cbf52d94248bdbe7ca0e2622c441a457f4747f2d8e8969d25f7b6e629e1b566,PodSandboxId:9de13b553c43b35e9aa30be717e083ac22af034f154d3238f9af3b74b9cfa0e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713784035468971146,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803,PodSandboxId:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713784035389146521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6e8cd79-7714-468b-aa30-1c7032aedc56 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.584868126Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53d26887-534b-4efc-9810-ef6b58fb1464 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.585132873Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-b4r5w,Uid:1670d513-9071-4ee0-ae1b-7600c98019b8,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713784208510394123,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T11:10:08.190426540Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-ft2jl,Uid:09e14815-b8e9-4b60-9b2c-c7d86cccb594,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1713784060091226452,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T11:07:39.776188091Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b2f58af56b111bfad58560278e986cc2852b5ea20e89eb68900084ce537ba0be,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4b44da93-f3fa-49b7-a701-5ab7a430374f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713784060081108667,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-22T11:07:39.772838141Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-ht7jl,Uid:c404a830-ddce-4c49-9e54-05d45871b4b0,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1713784060074349957,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T11:07:39.765172392Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&PodSandboxMetadata{Name:kube-proxy-w7r9d,Uid:56a4f7fc-5ce0-4d77-b30f-9d39cded457c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713784057626461096,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-04-22T11:07:37.297402525Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5694d3bdc4521fd36b2ea53baa3bd587487c1067d997f850c02dbe873a1776c7,Metadata:&PodSandboxMetadata{Name:kindnet-qbq9z,Uid:9751a17f-e26b-4ba8-81ce-077103c0aa1c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713784057618368875,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T11:07:37.284615771Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c0bfe906cafdccf860bd19ca9d4e03e86c477589df6194923b8485f838400aad,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-821265,Uid:5a9d642b5b95959b9f509e42995bd869,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1713784035207447426,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a9d642b5b95959b9f509e42995bd869,},Annotations:map[string]string{kubernetes.io/config.hash: 5a9d642b5b95959b9f509e42995bd869,kubernetes.io/config.seen: 2024-04-22T11:07:14.725823371Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-821265,Uid:0d47cc377f7ae04e53a8145721f1411a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713784035200313592,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0d47
cc377f7ae04e53a8145721f1411a,kubernetes.io/config.seen: 2024-04-22T11:07:14.725822574Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9de13b553c43b35e9aa30be717e083ac22af034f154d3238f9af3b74b9cfa0e1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-821265,Uid:0b2b58b303a812e19616ac42b0b60aae,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713784035199913810,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.150:8443,kubernetes.io/config.hash: 0b2b58b303a812e19616ac42b0b60aae,kubernetes.io/config.seen: 2024-04-22T11:07:14.725820134Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e36b4c8b43c66a7d4a5f4c59ce3a0900d5545b5ef014af353b925a642266dc96,Met
adata:&PodSandboxMetadata{Name:kube-controller-manager-ha-821265,Uid:6e7e7ddac3eb004675c7add1d1e064dc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713784035199614881,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6e7e7ddac3eb004675c7add1d1e064dc,kubernetes.io/config.seen: 2024-04-22T11:07:14.725821493Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&PodSandboxMetadata{Name:etcd-ha-821265,Uid:b68bde0d14316a4c3a901fddeacfd54a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713784035196509363,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-821265,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.150:2379,kubernetes.io/config.hash: b68bde0d14316a4c3a901fddeacfd54a,kubernetes.io/config.seen: 2024-04-22T11:07:14.725816107Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=53d26887-534b-4efc-9810-ef6b58fb1464 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.586087610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fe8bb16-3e14-456f-a49a-a5b5cd16007c name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.586155770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fe8bb16-3e14-456f-a49a-a5b5cd16007c name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.586388583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f9e45e23c690bb79c7fd65070b3188b60b1c0041e0955b10386851453d93e8c2,PodSandboxId:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713784211175253270,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391,PodSandboxId:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784060436897502,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139,PodSandboxId:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784060349824691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60306e6c18db97251960e340b26fd7591b71b65493a6e0603cccec3458948a44,PodSandboxId:b2f58af56b111bfad58560278e986cc2852b5ea20e89eb68900084ce537ba0be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1713784060256832706,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68514e3b402ea6cc11c51909fb9a2918a4580e62c5d019c9280d5fd40c8408cf,PodSandboxId:5694d3bdc4521fd36b2ea53baa3bd587487c1067d997f850c02dbe873a1776c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17137840
58198287836,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269,PodSandboxId:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713784057949961038,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ec191f8bcbef49468ef3d9b903de2da840c90478ee97540859b8f37f581f1,PodSandboxId:c0bfe906cafdccf860bd19ca9d4e03e86c477589df6194923b8485f838400aad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713784038374193381,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a9d642b5b95959b9f509e42995bd869,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5,PodSandboxId:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713784035610618080,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652741477fa90fca19fc111b1191a6acd0e2edcee141e389e5fd84f6018ec38e,PodSandboxId:e36b4c8b43c66a7d4a5f4c59ce3a0900d5545b5ef014af353b925a642266dc96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713784035573890703,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cbf52d94248bdbe7ca0e2622c441a457f4747f2d8e8969d25f7b6e629e1b566,PodSandboxId:9de13b553c43b35e9aa30be717e083ac22af034f154d3238f9af3b74b9cfa0e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713784035468971146,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803,PodSandboxId:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713784035389146521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fe8bb16-3e14-456f-a49a-a5b5cd16007c name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.608387377Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d3acbd5-f496-466a-a1a5-bd2e6d028bfd name=/runtime.v1.RuntimeService/Version
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.608462359Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d3acbd5-f496-466a-a1a5-bd2e6d028bfd name=/runtime.v1.RuntimeService/Version
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.609821610Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81b4157f-0a65-4b17-a8e0-0f63df504719 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.610911113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713784477610885204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81b4157f-0a65-4b17-a8e0-0f63df504719 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.611683318Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6870d887-f289-4641-92bb-15ba3fa3cbea name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.611760563Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6870d887-f289-4641-92bb-15ba3fa3cbea name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.612001420Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f9e45e23c690bb79c7fd65070b3188b60b1c0041e0955b10386851453d93e8c2,PodSandboxId:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713784211175253270,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391,PodSandboxId:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784060436897502,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139,PodSandboxId:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784060349824691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60306e6c18db97251960e340b26fd7591b71b65493a6e0603cccec3458948a44,PodSandboxId:b2f58af56b111bfad58560278e986cc2852b5ea20e89eb68900084ce537ba0be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1713784060256832706,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68514e3b402ea6cc11c51909fb9a2918a4580e62c5d019c9280d5fd40c8408cf,PodSandboxId:5694d3bdc4521fd36b2ea53baa3bd587487c1067d997f850c02dbe873a1776c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17137840
58198287836,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269,PodSandboxId:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713784057949961038,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ec191f8bcbef49468ef3d9b903de2da840c90478ee97540859b8f37f581f1,PodSandboxId:c0bfe906cafdccf860bd19ca9d4e03e86c477589df6194923b8485f838400aad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713784038374193381,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a9d642b5b95959b9f509e42995bd869,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5,PodSandboxId:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713784035610618080,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652741477fa90fca19fc111b1191a6acd0e2edcee141e389e5fd84f6018ec38e,PodSandboxId:e36b4c8b43c66a7d4a5f4c59ce3a0900d5545b5ef014af353b925a642266dc96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713784035573890703,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cbf52d94248bdbe7ca0e2622c441a457f4747f2d8e8969d25f7b6e629e1b566,PodSandboxId:9de13b553c43b35e9aa30be717e083ac22af034f154d3238f9af3b74b9cfa0e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713784035468971146,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803,PodSandboxId:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713784035389146521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6870d887-f289-4641-92bb-15ba3fa3cbea name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.660894700Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b29bb6f-e54d-445c-bd19-c5741a24544f name=/runtime.v1.RuntimeService/Version
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.660986745Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b29bb6f-e54d-445c-bd19-c5741a24544f name=/runtime.v1.RuntimeService/Version
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.663097842Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f593dbe-6836-412d-9064-6fc5d8b92bc3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.664220939Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713784477663765518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f593dbe-6836-412d-9064-6fc5d8b92bc3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.665029614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=810dff4a-8f83-424a-b03d-5499bb886945 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.665078702Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=810dff4a-8f83-424a-b03d-5499bb886945 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:14:37 ha-821265 crio[676]: time="2024-04-22 11:14:37.665312513Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f9e45e23c690bb79c7fd65070b3188b60b1c0041e0955b10386851453d93e8c2,PodSandboxId:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713784211175253270,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391,PodSandboxId:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784060436897502,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139,PodSandboxId:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784060349824691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60306e6c18db97251960e340b26fd7591b71b65493a6e0603cccec3458948a44,PodSandboxId:b2f58af56b111bfad58560278e986cc2852b5ea20e89eb68900084ce537ba0be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1713784060256832706,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68514e3b402ea6cc11c51909fb9a2918a4580e62c5d019c9280d5fd40c8408cf,PodSandboxId:5694d3bdc4521fd36b2ea53baa3bd587487c1067d997f850c02dbe873a1776c7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17137840
58198287836,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269,PodSandboxId:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713784057949961038,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26ec191f8bcbef49468ef3d9b903de2da840c90478ee97540859b8f37f581f1,PodSandboxId:c0bfe906cafdccf860bd19ca9d4e03e86c477589df6194923b8485f838400aad,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713784038374193381,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a9d642b5b95959b9f509e42995bd869,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5,PodSandboxId:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713784035610618080,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:652741477fa90fca19fc111b1191a6acd0e2edcee141e389e5fd84f6018ec38e,PodSandboxId:e36b4c8b43c66a7d4a5f4c59ce3a0900d5545b5ef014af353b925a642266dc96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713784035573890703,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cbf52d94248bdbe7ca0e2622c441a457f4747f2d8e8969d25f7b6e629e1b566,PodSandboxId:9de13b553c43b35e9aa30be717e083ac22af034f154d3238f9af3b74b9cfa0e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713784035468971146,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803,PodSandboxId:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713784035389146521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=810dff4a-8f83-424a-b03d-5499bb886945 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f9e45e23c690b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   82d54024bc68a       busybox-fc5497c4f-b4r5w
	28dbe3373b660       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   126db08ea55ac       coredns-7db6d8ff4d-ht7jl
	609e2855f754c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   84aaf42f76a8a       coredns-7db6d8ff4d-ft2jl
	60306e6c18db9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   b2f58af56b111       storage-provisioner
	68514e3b402ea       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   5694d3bdc4521       kindnet-qbq9z
	1f43ea569f86c       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      6 minutes ago       Running             kube-proxy                0                   626e64c737b2d       kube-proxy-w7r9d
	a26ec191f8bcb       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   c0bfe906cafdc       kube-vip-ha-821265
	2b3935bd9c893       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago       Running             kube-scheduler            0                   68a372e9f954b       kube-scheduler-ha-821265
	652741477fa90       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago       Running             kube-controller-manager   0                   e36b4c8b43c66       kube-controller-manager-ha-821265
	7cbf52d94248b       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago       Running             kube-apiserver            0                   9de13b553c43b       kube-apiserver-ha-821265
	ba49f85435f20       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   f773251009c17       etcd-ha-821265
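
The container table above is the CRI-level view captured by the log bundle. If the profile is still up, a roughly equivalent listing can usually be pulled straight from the node; this is a minimal sketch, assuming the ha-821265 profile is still running and that crictl is available inside the guest (it ships in the minikube ISO):

  out/minikube-linux-amd64 -p ha-821265 ssh -- sudo crictl ps -a
  out/minikube-linux-amd64 -p ha-821265 ssh -- sudo crictl ps --name coredns -o json

The JSON output is often easier to read than the single-line ListContainersResponse entries in the crio debug log above.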
	
	
	==> coredns [28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391] <==
	[INFO] 10.244.0.4:44847 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140809s
	[INFO] 10.244.0.4:35521 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000203677s
	[INFO] 10.244.1.2:55855 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000292202s
	[INFO] 10.244.1.2:40525 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001858622s
	[INFO] 10.244.1.2:43358 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160709s
	[INFO] 10.244.1.2:55629 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195731s
	[INFO] 10.244.1.2:44290 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121655s
	[INFO] 10.244.1.2:57358 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121564s
	[INFO] 10.244.2.2:59048 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159182s
	[INFO] 10.244.2.2:35567 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001954066s
	[INFO] 10.244.2.2:51799 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000221645s
	[INFO] 10.244.2.2:34300 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001398818s
	[INFO] 10.244.2.2:44605 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141089s
	[INFO] 10.244.2.2:60699 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114317s
	[INFO] 10.244.2.2:47652 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110384s
	[INFO] 10.244.0.4:58761 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147629s
	[INFO] 10.244.0.4:45372 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061515s
	[INFO] 10.244.1.2:39990 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000301231s
	[INFO] 10.244.2.2:38384 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218658s
	[INFO] 10.244.2.2:42087 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096499s
	[INFO] 10.244.2.2:46418 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091631s
	[INFO] 10.244.0.4:38705 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140004s
	[INFO] 10.244.2.2:47355 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124377s
	[INFO] 10.244.2.2:41383 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000176022s
	[INFO] 10.244.2.2:36036 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000263019s
	
	
	==> coredns [609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139] <==
	[INFO] 127.0.0.1:56528 - 52490 "HINFO IN 6584900057141735052.5629882702753792788. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017831721s
	[INFO] 10.244.0.4:39057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000647552s
	[INFO] 10.244.0.4:33128 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.014559084s
	[INFO] 10.244.1.2:55844 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178035s
	[INFO] 10.244.2.2:56677 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000145596s
	[INFO] 10.244.2.2:55471 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000502508s
	[INFO] 10.244.0.4:48892 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000180363s
	[INFO] 10.244.0.4:39631 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015636s
	[INFO] 10.244.1.2:41139 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001436054s
	[INFO] 10.244.1.2:50039 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000238831s
	[INFO] 10.244.2.2:49593 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099929s
	[INFO] 10.244.0.4:33617 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078273s
	[INFO] 10.244.0.4:35287 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154317s
	[INFO] 10.244.1.2:52682 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133804s
	[INFO] 10.244.1.2:40594 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130792s
	[INFO] 10.244.1.2:39775 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009509s
	[INFO] 10.244.2.2:55863 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00021768s
	[INFO] 10.244.0.4:36835 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092568s
	[INFO] 10.244.0.4:53708 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00016929s
	[INFO] 10.244.0.4:44024 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000203916s
	[INFO] 10.244.1.2:50167 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158884s
	[INFO] 10.244.1.2:49103 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120664s
	[INFO] 10.244.1.2:44739 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000212444s
	[INFO] 10.244.1.2:43569 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000207516s
	[INFO] 10.244.2.2:48876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000228682s
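
Both coredns blocks follow the log plugin's line format: client address and port, query ID, then the quoted tuple of record type, class, name, transport, request size, DO bit and advertised UDP buffer size, followed by the response code, response flags, response size in bytes and the service time. None of the queries above return SERVFAIL; the NXDOMAIN lines are the expected search-path misses for names like kubernetes.default. A minimal sketch for pulling the same logs and re-driving a lookup from the host, assuming the kubectl context is named after the profile as minikube sets it up:

  kubectl --context ha-821265 -n kube-system logs coredns-7db6d8ff4d-ht7jl
  kubectl --context ha-821265 run dnsprobe --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default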
	
	
	==> describe nodes <==
	Name:               ha-821265
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-821265
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=ha-821265
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T11_07_22_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:07:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-821265
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:14:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:10:25 +0000   Mon, 22 Apr 2024 11:07:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:10:25 +0000   Mon, 22 Apr 2024 11:07:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:10:25 +0000   Mon, 22 Apr 2024 11:07:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:10:25 +0000   Mon, 22 Apr 2024 11:07:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    ha-821265
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3708e3d49144fe9a219d30c45824055
	  System UUID:                e3708e3d-4914-4fe9-a219-d30c45824055
	  Boot ID:                    59d6bf31-99bc-4f8f-942a-1d3384515d3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-b4r5w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 coredns-7db6d8ff4d-ft2jl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m1s
	  kube-system                 coredns-7db6d8ff4d-ht7jl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m1s
	  kube-system                 etcd-ha-821265                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m17s
	  kube-system                 kindnet-qbq9z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m1s
	  kube-system                 kube-apiserver-ha-821265             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 kube-controller-manager-ha-821265    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-proxy-w7r9d                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m1s
	  kube-system                 kube-scheduler-ha-821265             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 kube-vip-ha-821265                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m59s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m24s (x7 over 7m24s)  kubelet          Node ha-821265 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m24s (x8 over 7m24s)  kubelet          Node ha-821265 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m24s (x8 over 7m24s)  kubelet          Node ha-821265 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m17s                  kubelet          Node ha-821265 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m17s                  kubelet          Node ha-821265 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m17s                  kubelet          Node ha-821265 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m2s                   node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	  Normal  NodeReady                6m59s                  kubelet          Node ha-821265 status is now: NodeReady
	  Normal  RegisteredNode           5m51s                  node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	  Normal  RegisteredNode           4m37s                  node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	
	
	Name:               ha-821265-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-821265-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=ha-821265
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T11_08_32_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:08:28 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-821265-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:11:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 22 Apr 2024 11:10:31 +0000   Mon, 22 Apr 2024 11:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 22 Apr 2024 11:10:31 +0000   Mon, 22 Apr 2024 11:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 22 Apr 2024 11:10:31 +0000   Mon, 22 Apr 2024 11:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 22 Apr 2024 11:10:31 +0000   Mon, 22 Apr 2024 11:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-821265-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee4ee33670c847d689ce31a8a149631b
	  System UUID:                ee4ee336-70c8-47d6-89ce-31a8a149631b
	  Boot ID:                    ec814c8f-fad1-48eb-83d3-5828e2f6775b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ft78k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-821265-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m8s
	  kube-system                 kindnet-jm2pd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m10s
	  kube-system                 kube-apiserver-ha-821265-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-ha-821265-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-proxy-j2hpk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-scheduler-ha-821265-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-vip-ha-821265-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  6m10s (x8 over 6m10s)  kubelet          Node ha-821265-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m10s (x8 over 6m10s)  kubelet          Node ha-821265-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m10s (x7 over 6m10s)  kubelet          Node ha-821265-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m7s                   node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	  Normal  RegisteredNode           5m51s                  node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	  Normal  RegisteredNode           4m37s                  node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	  Normal  NodeNotReady             2m42s                  node-controller  Node ha-821265-m02 status is now: NodeNotReady
	
	
	Name:               ha-821265-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-821265-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=ha-821265
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T11_09_46_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:09:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-821265-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:14:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:10:13 +0000   Mon, 22 Apr 2024 11:09:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:10:13 +0000   Mon, 22 Apr 2024 11:09:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:10:13 +0000   Mon, 22 Apr 2024 11:09:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:10:13 +0000   Mon, 22 Apr 2024 11:09:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    ha-821265-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fae8daa600b4453d8a90a572a44f23c8
	  System UUID:                fae8daa6-00b4-453d-8a90-a572a44f23c8
	  Boot ID:                    62e4e3f8-9bb3-4147-9a5d-9ce3b8996599
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fzcrw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-821265-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m53s
	  kube-system                 kindnet-d8qgr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m55s
	  kube-system                 kube-apiserver-ha-821265-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-controller-manager-ha-821265-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-proxy-lmhp7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-scheduler-ha-821265-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-vip-ha-821265-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m55s (x8 over 4m55s)  kubelet          Node ha-821265-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m55s (x8 over 4m55s)  kubelet          Node ha-821265-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m55s (x7 over 4m55s)  kubelet          Node ha-821265-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m52s                  node-controller  Node ha-821265-m03 event: Registered Node ha-821265-m03 in Controller
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-821265-m03 event: Registered Node ha-821265-m03 in Controller
	  Normal  RegisteredNode           4m37s                  node-controller  Node ha-821265-m03 event: Registered Node ha-821265-m03 in Controller
	
	
	Name:               ha-821265-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-821265-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=ha-821265
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T11_10_47_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:10:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-821265-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:14:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:11:17 +0000   Mon, 22 Apr 2024 11:10:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:11:17 +0000   Mon, 22 Apr 2024 11:10:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:11:17 +0000   Mon, 22 Apr 2024 11:10:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:11:17 +0000   Mon, 22 Apr 2024 11:10:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.252
	  Hostname:    ha-821265-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd9646c23a234a60a7a73b7377025a34
	  System UUID:                dd9646c2-3a23-4a60-a7a7-3b7377025a34
	  Boot ID:                    5cc549d7-73b1-4fa5-ab02-659fe0409704
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gvgbm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m52s
	  kube-system                 kube-proxy-hdvbv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m45s                  kube-proxy       
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m52s (x2 over 3m52s)  kubelet          Node ha-821265-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x2 over 3m52s)  kubelet          Node ha-821265-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x2 over 3m52s)  kubelet          Node ha-821265-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal  RegisteredNode           3m47s                  node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal  NodeReady                3m41s                  kubelet          Node ha-821265-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr22 11:06] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053897] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043778] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.665964] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.570884] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.736860] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr22 11:07] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.062413] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064974] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.181323] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.148920] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.299663] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.930467] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.065860] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.137174] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.064357] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.162362] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.079557] kauditd_printk_skb: 79 callbacks suppressed
	[ +16.384158] kauditd_printk_skb: 21 callbacks suppressed
	[Apr22 11:08] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803] <==
	{"level":"warn","ts":"2024-04-22T11:14:37.997765Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:37.998508Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.003056Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.021795Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.033842Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.044313Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.052061Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.062082Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.080833Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.086017Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.087915Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.097276Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.099712Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.110162Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.11522Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.118398Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.125412Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.133174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.141897Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.148289Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.154809Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.165043Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.172265Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.184183Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T11:14:38.199633Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2236e2deb63504cb","from":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:14:38 up 7 min,  0 users,  load average: 0.39, 0.36, 0.19
	Linux ha-821265 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [68514e3b402ea6cc11c51909fb9a2918a4580e62c5d019c9280d5fd40c8408cf] <==
	I0422 11:14:00.134368       1 main.go:250] Node ha-821265-m04 has CIDR [10.244.3.0/24] 
	I0422 11:14:10.141224       1 main.go:223] Handling node with IPs: map[192.168.39.150:{}]
	I0422 11:14:10.141275       1 main.go:227] handling current node
	I0422 11:14:10.141287       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I0422 11:14:10.141293       1 main.go:250] Node ha-821265-m02 has CIDR [10.244.1.0/24] 
	I0422 11:14:10.141394       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0422 11:14:10.141398       1 main.go:250] Node ha-821265-m03 has CIDR [10.244.2.0/24] 
	I0422 11:14:10.141435       1 main.go:223] Handling node with IPs: map[192.168.39.252:{}]
	I0422 11:14:10.141439       1 main.go:250] Node ha-821265-m04 has CIDR [10.244.3.0/24] 
	I0422 11:14:20.147777       1 main.go:223] Handling node with IPs: map[192.168.39.150:{}]
	I0422 11:14:20.147823       1 main.go:227] handling current node
	I0422 11:14:20.147833       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I0422 11:14:20.147839       1 main.go:250] Node ha-821265-m02 has CIDR [10.244.1.0/24] 
	I0422 11:14:20.147945       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0422 11:14:20.147975       1 main.go:250] Node ha-821265-m03 has CIDR [10.244.2.0/24] 
	I0422 11:14:20.148020       1 main.go:223] Handling node with IPs: map[192.168.39.252:{}]
	I0422 11:14:20.148025       1 main.go:250] Node ha-821265-m04 has CIDR [10.244.3.0/24] 
	I0422 11:14:30.159320       1 main.go:223] Handling node with IPs: map[192.168.39.150:{}]
	I0422 11:14:30.159496       1 main.go:227] handling current node
	I0422 11:14:30.159612       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I0422 11:14:30.159644       1 main.go:250] Node ha-821265-m02 has CIDR [10.244.1.0/24] 
	I0422 11:14:30.159883       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0422 11:14:30.159931       1 main.go:250] Node ha-821265-m03 has CIDR [10.244.2.0/24] 
	I0422 11:14:30.159991       1 main.go:223] Handling node with IPs: map[192.168.39.252:{}]
	I0422 11:14:30.160009       1 main.go:250] Node ha-821265-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7cbf52d94248bdbe7ca0e2622c441a457f4747f2d8e8969d25f7b6e629e1b566] <==
	I0422 11:07:20.788343       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0422 11:07:20.795514       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.150]
	I0422 11:07:20.796931       1 controller.go:615] quota admission added evaluator for: endpoints
	I0422 11:07:20.802119       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0422 11:07:21.650107       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0422 11:07:21.659779       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0422 11:07:21.695484       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0422 11:07:21.719175       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0422 11:07:37.012414       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0422 11:07:37.258019       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0422 11:10:13.058982       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54190: use of closed network connection
	E0422 11:10:13.284020       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54218: use of closed network connection
	E0422 11:10:13.515062       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54242: use of closed network connection
	E0422 11:10:13.744691       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54266: use of closed network connection
	E0422 11:10:13.958482       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54290: use of closed network connection
	E0422 11:10:14.166197       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54306: use of closed network connection
	E0422 11:10:14.367684       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54318: use of closed network connection
	E0422 11:10:14.567281       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54334: use of closed network connection
	E0422 11:10:14.774490       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54348: use of closed network connection
	E0422 11:10:15.120609       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54372: use of closed network connection
	E0422 11:10:15.327311       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54398: use of closed network connection
	E0422 11:10:15.541803       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54424: use of closed network connection
	E0422 11:10:15.747359       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54456: use of closed network connection
	E0422 11:10:15.977207       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54488: use of closed network connection
	E0422 11:10:16.174144       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54506: use of closed network connection
	
	
	==> kube-controller-manager [652741477fa90fca19fc111b1191a6acd0e2edcee141e389e5fd84f6018ec38e] <==
	I0422 11:09:43.274704       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-821265-m03\" does not exist"
	I0422 11:09:43.301064       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-821265-m03" podCIDRs=["10.244.2.0/24"]
	I0422 11:09:46.349739       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-821265-m03"
	I0422 11:10:08.190638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.178678ms"
	I0422 11:10:08.227463       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.618097ms"
	I0422 11:10:08.337429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="109.898774ms"
	I0422 11:10:08.582325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="244.823805ms"
	E0422 11:10:08.582375       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0422 11:10:08.582539       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="124.399µs"
	I0422 11:10:08.600168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.846µs"
	I0422 11:10:08.788992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.683µs"
	I0422 11:10:11.450292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.644186ms"
	I0422 11:10:11.450468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.457µs"
	I0422 11:10:12.315500       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.427625ms"
	I0422 11:10:12.315876       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.376µs"
	I0422 11:10:12.510138       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.283117ms"
	I0422 11:10:12.510681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.493µs"
	E0422 11:10:46.325269       1 certificate_controller.go:146] Sync csr-zbr6p failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-zbr6p": the object has been modified; please apply your changes to the latest version and try again
	I0422 11:10:46.598067       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-821265-m04\" does not exist"
	I0422 11:10:46.640138       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-821265-m04" podCIDRs=["10.244.3.0/24"]
	I0422 11:10:51.401049       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-821265-m04"
	I0422 11:10:57.926661       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-821265-m04"
	I0422 11:11:56.451193       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-821265-m04"
	I0422 11:11:56.567400       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.545461ms"
	I0422 11:11:56.568852       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.414µs"
	
	
	==> kube-proxy [1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269] <==
	I0422 11:07:38.328400       1 server_linux.go:69] "Using iptables proxy"
	I0422 11:07:38.341241       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.150"]
	I0422 11:07:38.416689       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 11:07:38.416754       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 11:07:38.416773       1 server_linux.go:165] "Using iptables Proxier"
	I0422 11:07:38.420819       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 11:07:38.421063       1 server.go:872] "Version info" version="v1.30.0"
	I0422 11:07:38.421099       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:07:38.422051       1 config.go:192] "Starting service config controller"
	I0422 11:07:38.422060       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 11:07:38.422106       1 config.go:101] "Starting endpoint slice config controller"
	I0422 11:07:38.422112       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 11:07:38.423874       1 config.go:319] "Starting node config controller"
	I0422 11:07:38.423884       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 11:07:38.522915       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 11:07:38.522987       1 shared_informer.go:320] Caches are synced for service config
	I0422 11:07:38.524398       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5] <==
	I0422 11:07:21.619771       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0422 11:09:43.352816       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-d8qgr\": pod kindnet-d8qgr is already assigned to node \"ha-821265-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-d8qgr" node="ha-821265-m03"
	E0422 11:09:43.353000       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod ec965a08-bffa-46ef-8edf-a3f29cb9b474(kube-system/kindnet-d8qgr) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-d8qgr"
	E0422 11:09:43.353028       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-d8qgr\": pod kindnet-d8qgr is already assigned to node \"ha-821265-m03\"" pod="kube-system/kindnet-d8qgr"
	I0422 11:09:43.353079       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-d8qgr" node="ha-821265-m03"
	E0422 11:09:43.352787       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lmhp7\": pod kube-proxy-lmhp7 is already assigned to node \"ha-821265-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lmhp7" node="ha-821265-m03"
	E0422 11:09:43.359109       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 45383871-e744-4764-823a-060a498ebc51(kube-system/kube-proxy-lmhp7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lmhp7"
	E0422 11:09:43.359136       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lmhp7\": pod kube-proxy-lmhp7 is already assigned to node \"ha-821265-m03\"" pod="kube-system/kube-proxy-lmhp7"
	I0422 11:09:43.359158       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lmhp7" node="ha-821265-m03"
	E0422 11:10:46.706330       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wx4rp\": pod kube-proxy-wx4rp is already assigned to node \"ha-821265-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wx4rp" node="ha-821265-m04"
	E0422 11:10:46.706533       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wx4rp\": pod kube-proxy-wx4rp is already assigned to node \"ha-821265-m04\"" pod="kube-system/kube-proxy-wx4rp"
	E0422 11:10:46.708956       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kfksf\": pod kindnet-kfksf is already assigned to node \"ha-821265-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kfksf" node="ha-821265-m04"
	E0422 11:10:46.709079       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kfksf\": pod kindnet-kfksf is already assigned to node \"ha-821265-m04\"" pod="kube-system/kindnet-kfksf"
	E0422 11:10:46.717414       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mkwbf\": pod kindnet-mkwbf is already assigned to node \"ha-821265-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-mkwbf" node="ha-821265-m04"
	E0422 11:10:46.717500       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 65fb25e8-6cff-49b8-902a-6415f2370faf(kube-system/kindnet-mkwbf) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mkwbf"
	E0422 11:10:46.717532       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mkwbf\": pod kindnet-mkwbf is already assigned to node \"ha-821265-m04\"" pod="kube-system/kindnet-mkwbf"
	I0422 11:10:46.717622       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mkwbf" node="ha-821265-m04"
	E0422 11:10:46.878843       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gvgbm\": pod kindnet-gvgbm is already assigned to node \"ha-821265-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gvgbm" node="ha-821265-m04"
	E0422 11:10:46.879083       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2a514bff-6dea-4863-8d8a-620a7f77e011(kube-system/kindnet-gvgbm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gvgbm"
	E0422 11:10:46.879126       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gvgbm\": pod kindnet-gvgbm is already assigned to node \"ha-821265-m04\"" pod="kube-system/kindnet-gvgbm"
	I0422 11:10:46.879172       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gvgbm" node="ha-821265-m04"
	E0422 11:10:46.880623       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lkrhg\": pod kube-proxy-lkrhg is already assigned to node \"ha-821265-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lkrhg" node="ha-821265-m04"
	E0422 11:10:46.880696       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1196fc23-a892-4e83-9cec-8e1a566a768a(kube-system/kube-proxy-lkrhg) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lkrhg"
	E0422 11:10:46.880810       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lkrhg\": pod kube-proxy-lkrhg is already assigned to node \"ha-821265-m04\"" pod="kube-system/kube-proxy-lkrhg"
	I0422 11:10:46.880880       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lkrhg" node="ha-821265-m04"
	
	
	==> kubelet <==
	Apr 22 11:10:21 ha-821265 kubelet[1370]: E0422 11:10:21.619839    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:10:21 ha-821265 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:10:21 ha-821265 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:10:21 ha-821265 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:10:21 ha-821265 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:11:21 ha-821265 kubelet[1370]: E0422 11:11:21.620243    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:11:21 ha-821265 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:11:21 ha-821265 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:11:21 ha-821265 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:11:21 ha-821265 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:12:21 ha-821265 kubelet[1370]: E0422 11:12:21.623214    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:12:21 ha-821265 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:12:21 ha-821265 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:12:21 ha-821265 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:12:21 ha-821265 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:13:21 ha-821265 kubelet[1370]: E0422 11:13:21.619745    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:13:21 ha-821265 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:13:21 ha-821265 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:13:21 ha-821265 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:13:21 ha-821265 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:14:21 ha-821265 kubelet[1370]: E0422 11:14:21.618159    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:14:21 ha-821265 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:14:21 ha-821265 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:14:21 ha-821265 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:14:21 ha-821265 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-821265 -n ha-821265
helpers_test.go:261: (dbg) Run:  kubectl --context ha-821265 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (54.45s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-821265 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-821265 -v=7 --alsologtostderr
E0422 11:16:17.644605   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-821265 -v=7 --alsologtostderr: exit status 82 (2m2.728800223s)

                                                
                                                
-- stdout --
	* Stopping node "ha-821265-m04"  ...
	* Stopping node "ha-821265-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 11:14:39.751051   33538 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:14:39.751180   33538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:14:39.751189   33538 out.go:304] Setting ErrFile to fd 2...
	I0422 11:14:39.751192   33538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:14:39.751365   33538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:14:39.751579   33538 out.go:298] Setting JSON to false
	I0422 11:14:39.751663   33538 mustload.go:65] Loading cluster: ha-821265
	I0422 11:14:39.752014   33538 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:14:39.752108   33538 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:14:39.752272   33538 mustload.go:65] Loading cluster: ha-821265
	I0422 11:14:39.752409   33538 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:14:39.752445   33538 stop.go:39] StopHost: ha-821265-m04
	I0422 11:14:39.752829   33538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:39.752873   33538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:39.768803   33538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40445
	I0422 11:14:39.769248   33538 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:39.769981   33538 main.go:141] libmachine: Using API Version  1
	I0422 11:14:39.770005   33538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:39.770363   33538 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:39.773146   33538 out.go:177] * Stopping node "ha-821265-m04"  ...
	I0422 11:14:39.774911   33538 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0422 11:14:39.774949   33538 main.go:141] libmachine: (ha-821265-m04) Calling .DriverName
	I0422 11:14:39.775171   33538 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0422 11:14:39.775198   33538 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHHostname
	I0422 11:14:39.778175   33538 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:39.778657   33538 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:10:32 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:14:39.778699   33538 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:14:39.778729   33538 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHPort
	I0422 11:14:39.778912   33538 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHKeyPath
	I0422 11:14:39.779099   33538 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHUsername
	I0422 11:14:39.779249   33538 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m04/id_rsa Username:docker}
	I0422 11:14:39.870090   33538 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0422 11:14:39.925957   33538 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0422 11:14:39.983309   33538 main.go:141] libmachine: Stopping "ha-821265-m04"...
	I0422 11:14:39.983363   33538 main.go:141] libmachine: (ha-821265-m04) Calling .GetState
	I0422 11:14:39.984878   33538 main.go:141] libmachine: (ha-821265-m04) Calling .Stop
	I0422 11:14:39.988518   33538 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 0/120
	I0422 11:14:40.989698   33538 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 1/120
	I0422 11:14:41.992128   33538 main.go:141] libmachine: (ha-821265-m04) Calling .GetState
	I0422 11:14:41.993410   33538 main.go:141] libmachine: Machine "ha-821265-m04" was stopped.
	I0422 11:14:41.993426   33538 stop.go:75] duration metric: took 2.218518277s to stop
	I0422 11:14:41.993463   33538 stop.go:39] StopHost: ha-821265-m03
	I0422 11:14:41.993727   33538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:14:41.993764   33538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:14:42.007913   33538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39775
	I0422 11:14:42.008312   33538 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:14:42.008891   33538 main.go:141] libmachine: Using API Version  1
	I0422 11:14:42.008917   33538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:14:42.009263   33538 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:14:42.011490   33538 out.go:177] * Stopping node "ha-821265-m03"  ...
	I0422 11:14:42.012685   33538 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0422 11:14:42.012712   33538 main.go:141] libmachine: (ha-821265-m03) Calling .DriverName
	I0422 11:14:42.012934   33538 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0422 11:14:42.012956   33538 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHHostname
	I0422 11:14:42.015787   33538 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:42.016233   33538 main.go:141] libmachine: (ha-821265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:8e:51", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:09:07 +0000 UTC Type:0 Mac:52:54:00:24:8e:51 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-821265-m03 Clientid:01:52:54:00:24:8e:51}
	I0422 11:14:42.016261   33538 main.go:141] libmachine: (ha-821265-m03) DBG | domain ha-821265-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:24:8e:51 in network mk-ha-821265
	I0422 11:14:42.016411   33538 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHPort
	I0422 11:14:42.016561   33538 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHKeyPath
	I0422 11:14:42.016715   33538 main.go:141] libmachine: (ha-821265-m03) Calling .GetSSHUsername
	I0422 11:14:42.016849   33538 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m03/id_rsa Username:docker}
	I0422 11:14:42.104646   33538 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0422 11:14:42.159284   33538 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0422 11:14:42.222485   33538 main.go:141] libmachine: Stopping "ha-821265-m03"...
	I0422 11:14:42.222519   33538 main.go:141] libmachine: (ha-821265-m03) Calling .GetState
	I0422 11:14:42.224123   33538 main.go:141] libmachine: (ha-821265-m03) Calling .Stop
	I0422 11:14:42.227645   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 0/120
	I0422 11:14:43.229921   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 1/120
	I0422 11:14:44.231304   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 2/120
	I0422 11:14:45.232629   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 3/120
	I0422 11:14:46.234058   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 4/120
	I0422 11:14:47.235999   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 5/120
	I0422 11:14:48.237776   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 6/120
	I0422 11:14:49.239806   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 7/120
	I0422 11:14:50.241458   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 8/120
	I0422 11:14:51.243051   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 9/120
	I0422 11:14:52.244831   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 10/120
	I0422 11:14:53.246428   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 11/120
	I0422 11:14:54.248170   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 12/120
	I0422 11:14:55.250438   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 13/120
	I0422 11:14:56.251827   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 14/120
	I0422 11:14:57.253716   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 15/120
	I0422 11:14:58.255515   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 16/120
	I0422 11:14:59.256914   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 17/120
	I0422 11:15:00.258308   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 18/120
	I0422 11:15:01.259635   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 19/120
	I0422 11:15:02.261465   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 20/120
	I0422 11:15:03.262974   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 21/120
	I0422 11:15:04.264234   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 22/120
	I0422 11:15:05.265814   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 23/120
	I0422 11:15:06.267265   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 24/120
	I0422 11:15:07.268768   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 25/120
	I0422 11:15:08.270260   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 26/120
	I0422 11:15:09.271655   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 27/120
	I0422 11:15:10.272962   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 28/120
	I0422 11:15:11.274325   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 29/120
	I0422 11:15:12.276324   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 30/120
	I0422 11:15:13.277664   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 31/120
	I0422 11:15:14.279036   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 32/120
	I0422 11:15:15.280346   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 33/120
	I0422 11:15:16.281975   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 34/120
	I0422 11:15:17.283868   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 35/120
	I0422 11:15:18.285498   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 36/120
	I0422 11:15:19.286953   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 37/120
	I0422 11:15:20.288337   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 38/120
	I0422 11:15:21.289699   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 39/120
	I0422 11:15:22.291507   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 40/120
	I0422 11:15:23.292879   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 41/120
	I0422 11:15:24.294266   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 42/120
	I0422 11:15:25.295642   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 43/120
	I0422 11:15:26.296863   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 44/120
	I0422 11:15:27.298828   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 45/120
	I0422 11:15:28.300184   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 46/120
	I0422 11:15:29.301652   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 47/120
	I0422 11:15:30.303074   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 48/120
	I0422 11:15:31.304494   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 49/120
	I0422 11:15:32.305772   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 50/120
	I0422 11:15:33.307174   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 51/120
	I0422 11:15:34.308531   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 52/120
	I0422 11:15:35.309864   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 53/120
	I0422 11:15:36.311369   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 54/120
	I0422 11:15:37.313021   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 55/120
	I0422 11:15:38.314345   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 56/120
	I0422 11:15:39.315880   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 57/120
	I0422 11:15:40.317272   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 58/120
	I0422 11:15:41.318602   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 59/120
	I0422 11:15:42.320377   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 60/120
	I0422 11:15:43.321584   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 61/120
	I0422 11:15:44.322950   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 62/120
	I0422 11:15:45.324351   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 63/120
	I0422 11:15:46.325850   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 64/120
	I0422 11:15:47.327478   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 65/120
	I0422 11:15:48.329557   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 66/120
	I0422 11:15:49.331277   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 67/120
	I0422 11:15:50.332792   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 68/120
	I0422 11:15:51.333996   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 69/120
	I0422 11:15:52.335824   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 70/120
	I0422 11:15:53.337055   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 71/120
	I0422 11:15:54.338428   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 72/120
	I0422 11:15:55.340205   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 73/120
	I0422 11:15:56.341604   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 74/120
	I0422 11:15:57.343356   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 75/120
	I0422 11:15:58.344852   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 76/120
	I0422 11:15:59.346907   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 77/120
	I0422 11:16:00.348057   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 78/120
	I0422 11:16:01.349375   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 79/120
	I0422 11:16:02.350970   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 80/120
	I0422 11:16:03.352440   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 81/120
	I0422 11:16:04.353788   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 82/120
	I0422 11:16:05.355398   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 83/120
	I0422 11:16:06.356935   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 84/120
	I0422 11:16:07.358803   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 85/120
	I0422 11:16:08.360270   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 86/120
	I0422 11:16:09.361762   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 87/120
	I0422 11:16:10.363064   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 88/120
	I0422 11:16:11.364602   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 89/120
	I0422 11:16:12.366545   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 90/120
	I0422 11:16:13.367865   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 91/120
	I0422 11:16:14.369286   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 92/120
	I0422 11:16:15.370791   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 93/120
	I0422 11:16:16.372195   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 94/120
	I0422 11:16:17.374086   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 95/120
	I0422 11:16:18.376072   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 96/120
	I0422 11:16:19.377523   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 97/120
	I0422 11:16:20.379200   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 98/120
	I0422 11:16:21.380676   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 99/120
	I0422 11:16:22.382092   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 100/120
	I0422 11:16:23.383506   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 101/120
	I0422 11:16:24.384962   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 102/120
	I0422 11:16:25.387282   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 103/120
	I0422 11:16:26.388728   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 104/120
	I0422 11:16:27.391130   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 105/120
	I0422 11:16:28.392551   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 106/120
	I0422 11:16:29.393966   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 107/120
	I0422 11:16:30.395979   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 108/120
	I0422 11:16:31.397597   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 109/120
	I0422 11:16:32.399508   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 110/120
	I0422 11:16:33.400835   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 111/120
	I0422 11:16:34.402192   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 112/120
	I0422 11:16:35.403978   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 113/120
	I0422 11:16:36.405361   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 114/120
	I0422 11:16:37.407089   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 115/120
	I0422 11:16:38.408402   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 116/120
	I0422 11:16:39.409673   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 117/120
	I0422 11:16:40.411307   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 118/120
	I0422 11:16:41.412550   33538 main.go:141] libmachine: (ha-821265-m03) Waiting for machine to stop 119/120
	I0422 11:16:42.413594   33538 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0422 11:16:42.413680   33538 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0422 11:16:42.415702   33538 out.go:177] 
	W0422 11:16:42.417077   33538 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0422 11:16:42.417097   33538 out.go:239] * 
	* 
	W0422 11:16:42.419371   33538 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 11:16:42.421679   33538 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-821265 -v=7 --alsologtostderr" : exit status 82
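For readers unfamiliar with the GUEST_STOP_TIMEOUT failure above: the stderr shows the kvm2 driver polling the VM state once per second ("Waiting for machine to stop 0/120" through "119/120") and then giving up while the machine still reports "Running", which is what makes the stop command exit with status 82. The Go sketch below is a hypothetical illustration of that polling pattern, not minikube's actual code; the names stopWithTimeout, requestStop and getState, and the shortened retry budget, are invented for the example.

package main

import (
	"errors"
	"fmt"
	"time"
)

// stopWithTimeout requests a stop and then polls getState once per second,
// up to maxRetries attempts, mirroring the "Waiting for machine to stop i/120"
// lines in the log above. It fails if the machine never leaves "Running".
func stopWithTimeout(requestStop func() error, getState func() string, maxRetries int) error {
	if err := requestStop(); err != nil {
		return err
	}
	for i := 0; i < maxRetries; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
		if getState() == "Stopped" {
			return nil
		}
		time.Sleep(1 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate the failed run: the VM never reaches "Stopped".
	err := stopWithTimeout(
		func() error { return nil },        // stop request is accepted
		func() string { return "Running" }, // state never changes
		3,                                  // 3 attempts instead of 120 to keep the demo short
	)
	fmt.Println("stop err:", err) // prints: stop err: unable to stop vm, current state "Running"
}

With the real 120-attempt budget this loop runs for two minutes, which matches the 11:14:42 to 11:16:42 window in the log above.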
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-821265 --wait=true -v=7 --alsologtostderr
E0422 11:16:45.328940   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
E0422 11:16:57.327754   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-821265 --wait=true -v=7 --alsologtostderr: (4m4.505669526s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-821265
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-821265 -n ha-821265
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-821265 logs -n 25: (2.140752301s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-821265 cp ha-821265-m03:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m02:/home/docker/cp-test_ha-821265-m03_ha-821265-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265-m02 sudo cat                                          | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m03_ha-821265-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m03:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04:/home/docker/cp-test_ha-821265-m03_ha-821265-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265-m04 sudo cat                                          | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m03_ha-821265-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-821265 cp testdata/cp-test.txt                                                | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1102049705/001/cp-test_ha-821265-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265:/home/docker/cp-test_ha-821265-m04_ha-821265.txt                       |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265 sudo cat                                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m04_ha-821265.txt                                 |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m02:/home/docker/cp-test_ha-821265-m04_ha-821265-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265-m02 sudo cat                                          | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m04_ha-821265-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m03:/home/docker/cp-test_ha-821265-m04_ha-821265-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265-m03 sudo cat                                          | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m04_ha-821265-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-821265 node stop m02 -v=7                                                     | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-821265 node start m02 -v=7                                                    | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-821265 -v=7                                                           | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-821265 -v=7                                                                | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-821265 --wait=true -v=7                                                    | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:16 UTC | 22 Apr 24 11:20 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-821265                                                                | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:20 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 11:16:42
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 11:16:42.486997   33971 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:16:42.487233   33971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:16:42.487241   33971 out.go:304] Setting ErrFile to fd 2...
	I0422 11:16:42.487245   33971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:16:42.487432   33971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:16:42.488117   33971 out.go:298] Setting JSON to false
	I0422 11:16:42.489889   33971 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3546,"bootTime":1713781057,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 11:16:42.489955   33971 start.go:139] virtualization: kvm guest
	I0422 11:16:42.492400   33971 out.go:177] * [ha-821265] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 11:16:42.494287   33971 notify.go:220] Checking for updates...
	I0422 11:16:42.494297   33971 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 11:16:42.495769   33971 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 11:16:42.497132   33971 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 11:16:42.498694   33971 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:16:42.500058   33971 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 11:16:42.501395   33971 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 11:16:42.503034   33971 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:16:42.503131   33971 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 11:16:42.503542   33971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:16:42.503592   33971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:16:42.518858   33971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36535
	I0422 11:16:42.519381   33971 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:16:42.519938   33971 main.go:141] libmachine: Using API Version  1
	I0422 11:16:42.519960   33971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:16:42.520314   33971 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:16:42.520543   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:16:42.556170   33971 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 11:16:42.557837   33971 start.go:297] selected driver: kvm2
	I0422 11:16:42.557852   33971 start.go:901] validating driver "kvm2" against &{Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.252 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:16:42.558004   33971 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 11:16:42.558318   33971 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 11:16:42.558395   33971 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18711-7633/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 11:16:42.572492   33971 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 11:16:42.573312   33971 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 11:16:42.573397   33971 cni.go:84] Creating CNI manager for ""
	I0422 11:16:42.573414   33971 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0422 11:16:42.573486   33971 start.go:340] cluster config:
	{Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.252 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:16:42.573638   33971 iso.go:125] acquiring lock: {Name:mkb6ac9fd17ffabc92a94047094130aad6203a95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 11:16:42.576329   33971 out.go:177] * Starting "ha-821265" primary control-plane node in "ha-821265" cluster
	I0422 11:16:42.577901   33971 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 11:16:42.577943   33971 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 11:16:42.577953   33971 cache.go:56] Caching tarball of preloaded images
	I0422 11:16:42.578051   33971 preload.go:173] Found /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 11:16:42.578064   33971 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 11:16:42.578195   33971 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:16:42.578419   33971 start.go:360] acquireMachinesLock for ha-821265: {Name:mk5cb9b294e703b264c1f97ac968ffd01e93b576 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 11:16:42.578481   33971 start.go:364] duration metric: took 40.744µs to acquireMachinesLock for "ha-821265"
	I0422 11:16:42.578499   33971 start.go:96] Skipping create...Using existing machine configuration
	I0422 11:16:42.578507   33971 fix.go:54] fixHost starting: 
	I0422 11:16:42.578781   33971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:16:42.578818   33971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:16:42.592489   33971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34723
	I0422 11:16:42.592859   33971 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:16:42.593293   33971 main.go:141] libmachine: Using API Version  1
	I0422 11:16:42.593320   33971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:16:42.593624   33971 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:16:42.593827   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:16:42.593995   33971 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:16:42.595453   33971 fix.go:112] recreateIfNeeded on ha-821265: state=Running err=<nil>
	W0422 11:16:42.595480   33971 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 11:16:42.597619   33971 out.go:177] * Updating the running kvm2 "ha-821265" VM ...
	I0422 11:16:42.598928   33971 machine.go:94] provisionDockerMachine start ...
	I0422 11:16:42.598950   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:16:42.599144   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:16:42.601351   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.601721   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:16:42.601740   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.601874   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:16:42.602032   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:42.602160   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:42.602274   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:16:42.602447   33971 main.go:141] libmachine: Using SSH client type: native
	I0422 11:16:42.602660   33971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:16:42.602671   33971 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 11:16:42.718874   33971 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-821265
	
	I0422 11:16:42.718903   33971 main.go:141] libmachine: (ha-821265) Calling .GetMachineName
	I0422 11:16:42.719177   33971 buildroot.go:166] provisioning hostname "ha-821265"
	I0422 11:16:42.719205   33971 main.go:141] libmachine: (ha-821265) Calling .GetMachineName
	I0422 11:16:42.719410   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:16:42.722145   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.722526   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:16:42.722563   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.722684   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:16:42.722852   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:42.723032   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:42.723192   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:16:42.723364   33971 main.go:141] libmachine: Using SSH client type: native
	I0422 11:16:42.723553   33971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:16:42.723568   33971 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-821265 && echo "ha-821265" | sudo tee /etc/hostname
	I0422 11:16:42.851081   33971 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-821265
	
	I0422 11:16:42.851110   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:16:42.853907   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.854315   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:16:42.854353   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.854559   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:16:42.854739   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:42.854901   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:42.855050   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:16:42.855196   33971 main.go:141] libmachine: Using SSH client type: native
	I0422 11:16:42.855431   33971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:16:42.855455   33971 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-821265' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-821265/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-821265' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 11:16:42.962501   33971 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 11:16:42.962527   33971 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18711-7633/.minikube CaCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18711-7633/.minikube}
	I0422 11:16:42.962560   33971 buildroot.go:174] setting up certificates
	I0422 11:16:42.962571   33971 provision.go:84] configureAuth start
	I0422 11:16:42.962581   33971 main.go:141] libmachine: (ha-821265) Calling .GetMachineName
	I0422 11:16:42.962854   33971 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:16:42.965480   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.965864   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:16:42.965886   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.966034   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:16:42.968147   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.968482   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:16:42.968507   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.968625   33971 provision.go:143] copyHostCerts
	I0422 11:16:42.968657   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:16:42.968685   33971 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem, removing ...
	I0422 11:16:42.968694   33971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:16:42.968813   33971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem (1123 bytes)
	I0422 11:16:42.968923   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:16:42.968950   33971 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem, removing ...
	I0422 11:16:42.968965   33971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:16:42.969002   33971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem (1679 bytes)
	I0422 11:16:42.969091   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:16:42.969110   33971 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem, removing ...
	I0422 11:16:42.969117   33971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:16:42.969139   33971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem (1078 bytes)
	I0422 11:16:42.969181   33971 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem org=jenkins.ha-821265 san=[127.0.0.1 192.168.39.150 ha-821265 localhost minikube]
	I0422 11:16:43.101270   33971 provision.go:177] copyRemoteCerts
	I0422 11:16:43.101327   33971 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 11:16:43.101348   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:16:43.103986   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:43.104365   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:16:43.104400   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:43.104577   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:16:43.104854   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:43.105060   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:16:43.105222   33971 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:16:43.190030   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 11:16:43.190115   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 11:16:43.219245   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 11:16:43.219317   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0422 11:16:43.247742   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 11:16:43.247805   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 11:16:43.278390   33971 provision.go:87] duration metric: took 315.803849ms to configureAuth
	I0422 11:16:43.278415   33971 buildroot.go:189] setting minikube options for container-runtime
	I0422 11:16:43.278619   33971 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:16:43.278687   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:16:43.281124   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:43.281536   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:16:43.281564   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:43.281697   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:16:43.281893   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:43.282066   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:43.282203   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:16:43.282378   33971 main.go:141] libmachine: Using SSH client type: native
	I0422 11:16:43.282577   33971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:16:43.282598   33971 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 11:18:14.302771   33971 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 11:18:14.302800   33971 machine.go:97] duration metric: took 1m31.703853743s to provisionDockerMachine
	I0422 11:18:14.302814   33971 start.go:293] postStartSetup for "ha-821265" (driver="kvm2")
	I0422 11:18:14.302827   33971 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 11:18:14.302845   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:18:14.303187   33971 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 11:18:14.303223   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:18:14.306285   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.306670   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:18:14.306692   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.306823   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:18:14.306979   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:18:14.307136   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:18:14.307283   33971 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:18:14.388598   33971 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 11:18:14.393911   33971 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 11:18:14.393939   33971 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/addons for local assets ...
	I0422 11:18:14.394020   33971 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/files for local assets ...
	I0422 11:18:14.394117   33971 filesync.go:149] local asset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> 149452.pem in /etc/ssl/certs
	I0422 11:18:14.394130   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /etc/ssl/certs/149452.pem
	I0422 11:18:14.394277   33971 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 11:18:14.405520   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:18:14.433983   33971 start.go:296] duration metric: took 131.157052ms for postStartSetup
	I0422 11:18:14.434029   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:18:14.434327   33971 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0422 11:18:14.434359   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:18:14.437083   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.437505   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:18:14.437528   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.437668   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:18:14.437865   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:18:14.438028   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:18:14.438227   33971 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	W0422 11:18:14.519785   33971 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0422 11:18:14.519805   33971 fix.go:56] duration metric: took 1m31.941298972s for fixHost
	I0422 11:18:14.519829   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:18:14.522443   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.522780   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:18:14.522807   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.522914   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:18:14.523197   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:18:14.523397   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:18:14.523599   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:18:14.523828   33971 main.go:141] libmachine: Using SSH client type: native
	I0422 11:18:14.524030   33971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:18:14.524044   33971 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 11:18:14.626381   33971 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713784694.594145142
	
	I0422 11:18:14.626407   33971 fix.go:216] guest clock: 1713784694.594145142
	I0422 11:18:14.626418   33971 fix.go:229] Guest: 2024-04-22 11:18:14.594145142 +0000 UTC Remote: 2024-04-22 11:18:14.519813701 +0000 UTC m=+92.082745768 (delta=74.331441ms)
	I0422 11:18:14.626443   33971 fix.go:200] guest clock delta is within tolerance: 74.331441ms
	I0422 11:18:14.626448   33971 start.go:83] releasing machines lock for "ha-821265", held for 1m32.047956729s
	I0422 11:18:14.626469   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:18:14.626768   33971 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:18:14.629489   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.629939   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:18:14.629963   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.630136   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:18:14.630646   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:18:14.630813   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:18:14.630894   33971 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 11:18:14.630946   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:18:14.631020   33971 ssh_runner.go:195] Run: cat /version.json
	I0422 11:18:14.631050   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:18:14.633208   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.633551   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:18:14.633587   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.633604   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.633732   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:18:14.633879   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:18:14.634024   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:18:14.634030   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:18:14.634046   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.634165   33971 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:18:14.634200   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:18:14.634328   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:18:14.634465   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:18:14.634628   33971 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:18:14.710759   33971 ssh_runner.go:195] Run: systemctl --version
	I0422 11:18:14.742872   33971 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 11:18:14.916988   33971 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 11:18:14.924012   33971 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 11:18:14.924080   33971 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 11:18:14.935584   33971 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0422 11:18:14.935610   33971 start.go:494] detecting cgroup driver to use...
	I0422 11:18:14.935680   33971 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 11:18:14.954977   33971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 11:18:14.970144   33971 docker.go:217] disabling cri-docker service (if available) ...
	I0422 11:18:14.970210   33971 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 11:18:14.985826   33971 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 11:18:15.001395   33971 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 11:18:15.160016   33971 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 11:18:15.315939   33971 docker.go:233] disabling docker service ...
	I0422 11:18:15.316014   33971 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 11:18:15.334316   33971 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 11:18:15.349873   33971 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 11:18:15.508685   33971 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 11:18:15.675710   33971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 11:18:15.692132   33971 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 11:18:15.714162   33971 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 11:18:15.714238   33971 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:18:15.728130   33971 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 11:18:15.728185   33971 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:18:15.740896   33971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:18:15.753390   33971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:18:15.765705   33971 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 11:18:15.778230   33971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:18:15.790526   33971 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:18:15.802999   33971 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:18:15.816471   33971 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 11:18:15.827654   33971 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 11:18:15.838665   33971 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:18:15.984506   33971 ssh_runner.go:195] Run: sudo systemctl restart crio
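The sequence of sed edits above leaves the CRI-O drop-in roughly as follows (a sketch, assuming the stock layout of /etc/crio/crio.conf.d/02-crio.conf; section headers are shown only for orientation):

	# /etc/crio/crio.conf.d/02-crio.conf after the edits above
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]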
	I0422 11:18:16.534017   33971 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 11:18:16.534105   33971 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 11:18:16.539635   33971 start.go:562] Will wait 60s for crictl version
	I0422 11:18:16.539698   33971 ssh_runner.go:195] Run: which crictl
	I0422 11:18:16.544033   33971 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 11:18:16.590935   33971 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 11:18:16.591011   33971 ssh_runner.go:195] Run: crio --version
	I0422 11:18:16.625412   33971 ssh_runner.go:195] Run: crio --version
	I0422 11:18:16.660489   33971 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 11:18:16.661937   33971 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:18:16.664423   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:16.664733   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:18:16.664759   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:16.664938   33971 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 11:18:16.670314   33971 kubeadm.go:877] updating cluster {Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:
default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.252 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvis
or:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 11:18:16.670456   33971 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 11:18:16.670506   33971 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 11:18:16.717049   33971 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 11:18:16.717074   33971 crio.go:433] Images already preloaded, skipping extraction
	I0422 11:18:16.717119   33971 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 11:18:16.755351   33971 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 11:18:16.755370   33971 cache_images.go:84] Images are preloaded, skipping loading
	I0422 11:18:16.755378   33971 kubeadm.go:928] updating node { 192.168.39.150 8443 v1.30.0 crio true true} ...
	I0422 11:18:16.755497   33971 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-821265 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
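For reference, the kubelet unit override shown above is what gets written to the systemd drop-in a few lines further down; it can be confirmed on the node with (illustrative commands, not part of the test run):

	systemctl cat kubelet                                        # unit file plus active drop-ins
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the 309-byte drop-in scp'd below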
	I0422 11:18:16.755584   33971 ssh_runner.go:195] Run: crio config
	I0422 11:18:16.809629   33971 cni.go:84] Creating CNI manager for ""
	I0422 11:18:16.809649   33971 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0422 11:18:16.809661   33971 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 11:18:16.809680   33971 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-821265 NodeName:ha-821265 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 11:18:16.809809   33971 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-821265"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
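The kubeadm configuration dumped above is written to /var/tmp/minikube/kubeadm.yaml.new later in this run; if needed it can be sanity-checked on the node with kubeadm itself (a sketch, assuming the v1.30.0 binaries already present under /var/lib/minikube/binaries):

	sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new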
	
	I0422 11:18:16.809831   33971 kube-vip.go:111] generating kube-vip config ...
	I0422 11:18:16.809879   33971 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0422 11:18:16.823240   33971 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0422 11:18:16.823376   33971 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
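Once the kube-vip static pod above is running, the HA virtual IP it manages can be checked from the node (illustrative commands, not part of the test run):

	ip addr show eth0 | grep 192.168.39.254        # the VIP should be bound on the current leader
	curl -sk https://192.168.39.254:8443/healthz   # API server reachable through the VIP on port 8443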
	I0422 11:18:16.823439   33971 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 11:18:16.834405   33971 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 11:18:16.834466   33971 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0422 11:18:16.846033   33971 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0422 11:18:16.867185   33971 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 11:18:16.886124   33971 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0422 11:18:16.904301   33971 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0422 11:18:16.922003   33971 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0422 11:18:16.927344   33971 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:18:17.079485   33971 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 11:18:17.095020   33971 certs.go:68] Setting up /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265 for IP: 192.168.39.150
	I0422 11:18:17.095042   33971 certs.go:194] generating shared ca certs ...
	I0422 11:18:17.095056   33971 certs.go:226] acquiring lock for ca certs: {Name:mk0b77082b88c771d0b00be5267ca31dfee6f85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:18:17.095195   33971 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key
	I0422 11:18:17.095232   33971 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key
	I0422 11:18:17.095248   33971 certs.go:256] generating profile certs ...
	I0422 11:18:17.095322   33971 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key
	I0422 11:18:17.095347   33971 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.c2a57ae6
	I0422 11:18:17.095362   33971 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.c2a57ae6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150 192.168.39.39 192.168.39.95 192.168.39.254]
	I0422 11:18:17.297368   33971 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.c2a57ae6 ...
	I0422 11:18:17.297397   33971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.c2a57ae6: {Name:mk329652d53ceaf163cc9215e6e3102215ab0232 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:18:17.297562   33971 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.c2a57ae6 ...
	I0422 11:18:17.297573   33971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.c2a57ae6: {Name:mkd9033c2a3f5e2f4d691d0dc3d49c9b8162a362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:18:17.297643   33971 certs.go:381] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.c2a57ae6 -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt
	I0422 11:18:17.297775   33971 certs.go:385] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.c2a57ae6 -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key
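The apiserver certificate generated above embeds SANs for all three control-plane IPs plus the 192.168.39.254 VIP; they can be inspected with openssl (illustrative):

	openssl x509 -noout -text -in /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt | grep -A1 'Subject Alternative Name'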
	I0422 11:18:17.297911   33971 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key
	I0422 11:18:17.297930   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 11:18:17.297942   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 11:18:17.297955   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 11:18:17.297968   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 11:18:17.297980   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 11:18:17.297991   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 11:18:17.298003   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 11:18:17.298015   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 11:18:17.298061   33971 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem (1338 bytes)
	W0422 11:18:17.298092   33971 certs.go:480] ignoring /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945_empty.pem, impossibly tiny 0 bytes
	I0422 11:18:17.298101   33971 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem (1679 bytes)
	I0422 11:18:17.298122   33971 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem (1078 bytes)
	I0422 11:18:17.298142   33971 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem (1123 bytes)
	I0422 11:18:17.298163   33971 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem (1679 bytes)
	I0422 11:18:17.298200   33971 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:18:17.298235   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /usr/share/ca-certificates/149452.pem
	I0422 11:18:17.298248   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:18:17.298267   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem -> /usr/share/ca-certificates/14945.pem
	I0422 11:18:17.298840   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 11:18:17.399398   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 11:18:17.468177   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 11:18:17.514170   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 11:18:17.559462   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0422 11:18:17.587362   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 11:18:17.616099   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 11:18:17.657815   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 11:18:17.684678   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /usr/share/ca-certificates/149452.pem (1708 bytes)
	I0422 11:18:17.710672   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 11:18:17.739306   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem --> /usr/share/ca-certificates/14945.pem (1338 bytes)
	I0422 11:18:17.766073   33971 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 11:18:17.784112   33971 ssh_runner.go:195] Run: openssl version
	I0422 11:18:17.790510   33971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149452.pem && ln -fs /usr/share/ca-certificates/149452.pem /etc/ssl/certs/149452.pem"
	I0422 11:18:17.802712   33971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149452.pem
	I0422 11:18:17.807704   33971 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 10:51 /usr/share/ca-certificates/149452.pem
	I0422 11:18:17.807762   33971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149452.pem
	I0422 11:18:17.814073   33971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149452.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 11:18:17.825154   33971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 11:18:17.836933   33971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:18:17.841859   33971 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:18:17.841938   33971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:18:17.848643   33971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 11:18:17.859376   33971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14945.pem && ln -fs /usr/share/ca-certificates/14945.pem /etc/ssl/certs/14945.pem"
	I0422 11:18:17.870936   33971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14945.pem
	I0422 11:18:17.875705   33971 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 10:51 /usr/share/ca-certificates/14945.pem
	I0422 11:18:17.875761   33971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14945.pem
	I0422 11:18:17.881961   33971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14945.pem /etc/ssl/certs/51391683.0"
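The hash-and-symlink steps above follow the standard OpenSSL CA directory convention: each PEM under /etc/ssl/certs gets a <subject-hash>.0 symlink so the library can locate it. A minimal sketch of the same operation:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/"$h".0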
	I0422 11:18:17.892055   33971 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 11:18:17.896890   33971 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 11:18:17.903057   33971 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 11:18:17.909049   33971 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 11:18:17.915315   33971 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 11:18:17.921555   33971 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 11:18:17.927605   33971 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
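Each of the -checkend 86400 probes above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), e.g.:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo 'still valid for >24h' || echo 'expiring within 24h'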
	I0422 11:18:17.933764   33971 kubeadm.go:391] StartCluster: {Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:def
ault APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.252 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:
false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:18:17.933890   33971 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 11:18:17.933926   33971 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 11:18:17.973785   33971 cri.go:89] found id: "16ef56225fa557ba27676ea985b488c3ee74c6c8596475b369b680ef8452686c"
	I0422 11:18:17.973806   33971 cri.go:89] found id: "c35de5462c21abea81ffc8d36f5be3ac560f53ea35d05d46cef598052731c89e"
	I0422 11:18:17.973810   33971 cri.go:89] found id: "38fd57ab261cd8c0d18f36cf8e96372b4bc8bd7a5e3a2fecb4c1e18f64b434a9"
	I0422 11:18:17.973813   33971 cri.go:89] found id: "1998bef851f9a842f606af6c4dfadb36bac1aecddb6b3799e3f13edb7f1acf58"
	I0422 11:18:17.973816   33971 cri.go:89] found id: "03c93b733e9d824b355dd41ee07faefa7e1f8b2a4f452bb053f1a9edd8d4106f"
	I0422 11:18:17.973819   33971 cri.go:89] found id: "28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391"
	I0422 11:18:17.973821   33971 cri.go:89] found id: "609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139"
	I0422 11:18:17.973824   33971 cri.go:89] found id: "1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269"
	I0422 11:18:17.973826   33971 cri.go:89] found id: "a26ec191f8bcbef49468ef3d9b903de2da840c90478ee97540859b8f37f581f1"
	I0422 11:18:17.973834   33971 cri.go:89] found id: "2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5"
	I0422 11:18:17.973838   33971 cri.go:89] found id: "652741477fa90fca19fc111b1191a6acd0e2edcee141e389e5fd84f6018ec38e"
	I0422 11:18:17.973840   33971 cri.go:89] found id: "7cbf52d94248bdbe7ca0e2622c441a457f4747f2d8e8969d25f7b6e629e1b566"
	I0422 11:18:17.973843   33971 cri.go:89] found id: "ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803"
	I0422 11:18:17.973845   33971 cri.go:89] found id: ""
	I0422 11:18:17.973882   33971 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.791327354Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bfff2c06-8551-4cb2-9766-cd617b78e01f name=/runtime.v1.RuntimeService/Version
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.792762995Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=070df3ba-92b2-4881-8dfa-1904c62d2685 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.793444557Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713784847793415736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=070df3ba-92b2-4881-8dfa-1904c62d2685 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.796275482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e3c2b2b-1e36-4072-a147-b0db2c592d17 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.796339414Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e3c2b2b-1e36-4072-a147-b0db2c592d17 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.796844306Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdbfadb4ed8d19096021578583930566b38bffb62ac75a0fa4bfa1854bc51c07,PodSandboxId:5718ac2f010731d932225775ad8b53843e8a598c210cccfd78b15fcdc08bebc4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713784773618457331,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7aebece9906bc4053b906e1ecb267481a893fb7ff00bed5de74ed1cfa54000,PodSandboxId:d7028b8f29863bad3892327487b14f468d4c9cb4a5469a9b4b34ae91148f52c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713784762600202240,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06935d6ef805f3d3cf6a05bcb64dc081d72aae88db019d142b68750d3cf1c867,PodSandboxId:c83e50830d3f4db878e319b1cce7bf9160c76a605bbddbb15186b52a363c346f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713784741599078298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9805a7cceb20ebe5dd98c40f4989b29929823f101fc5fa3e52ce922be823cf,PodSandboxId:3bce0f832a4b7c43c1a8d39bf39eebbaf7ef2c41958b8711e7cf2f48c892aa3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713784734596266351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef4332ea1f047dc3cfb415e4e3f6e85cf33465b2ef9f18f7951278d2479ca93,PodSandboxId:be2b7bbc0a977b699d86984cd22eb583aada9b085c8a1907e359c7b60a8b31c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713784732974770708,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84fb7087c9854ea4137c3862fe9dc600a9d3f90b3fbc9522a51e51e681a08e1,PodSandboxId:0fdc24e0bdf40a907df680923387fa4412d3fe58200f36e9c854f25f4915fb23,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713784715707288035,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd9482ff289c9d12747129b48272b7a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:d3cbf7c282792930e1df477971a4bd28b78cb49c295f6e0ac2c8a454824de5d2,PodSandboxId:d7028b8f29863bad3892327487b14f468d4c9cb4a5469a9b4b34ae91148f52c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713784699818708388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:45ee3a04fea005538c47381509fdf3d9e53cfe0bb8e8e14149e912ea8a67cfd8,PodSandboxId:e92faf278f88298e80a1a96b654d4a23ef26ee537f98d7b37fa7e8ecaaaf94c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713784699723405687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba046555
40d31d9ebf03299c794d1c3c3623ea9026908acaac007ac12a740b4,PodSandboxId:2800fa5fa268e57b67df341230522c1d69a07af4049cd86a97df3ceb3abff22c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784700024922806,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:086db7b19ea3bed15f2bf46fc53e6befb389e2aa6d163eb0290b45841b20a974,PodSandboxId:c1f81420600b023ee32a3073be284b93a2cbc2919d2092900ade645f006f514f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713784699657509914,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb1b67b39ae4f69aabbcde5efda711e2ad29a9f6926b4f1e78b54d9fcd92ed97,PodSandboxId:3bce0f832a4b7c43c1a8d39bf39eebbaf7ef2c41958b8711e7cf2f48c892aa3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713784699622280749,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27ec30a0ad7913b7b1f7c2670ac92c8e3c79a52b67ccae30cf41067898e375c,PodSandboxId:e1d9c3b2c209a79ca019d0c6b4e1e1d23a1390aed1ee7374d58e639804d6cc5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713784699637480543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594b38d4c919f0bd4386634ffa22d99b282bd1d1e2d832d8b64e67b021e866d4,PodSandboxId:c83e50830d3f4db878e319b1cce7bf9160c76a605bbddbb15186b52a363c346f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713784699510289451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b77e388cf4a08ba51bd96cd087b8b7ef6a23d957a16f1deb5ca55943ffe9f4,PodSandboxId:5718ac2f010731d932225775ad8b53843e8a598c210cccfd78b15fcdc08bebc4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713784699059530582,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bec52a480e1ba363d05d69d01be4a0cee8746d920f0865e277f9c21bc87cbe3,PodSandboxId:97870b1b56dc1a85e2557bc8e33b02398db33a96e36cc229f4632c06816d7196,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784698992367850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e45e23c690bb79c7fd65070b3188b60b1c0041e0955b10386851453d93e8c2,PodSandboxId:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713784211175334269,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernete
s.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391,PodSandboxId:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713784060436950826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139,PodSandboxId:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713784060349985507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269,PodSandboxId:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713784057949970963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5,PodSandboxId:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713784035610721822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803,PodSandboxId:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1713784035389376110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e3c2b2b-1e36-4072-a147-b0db2c592d17 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.869376480Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=88f6f780-0060-48ea-aeb5-0e119a518518 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.869480596Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=88f6f780-0060-48ea-aeb5-0e119a518518 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.873060264Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=814b55cb-44d8-451a-9536-100258a1b3d4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.873776746Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713784847873746077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=814b55cb-44d8-451a-9536-100258a1b3d4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.875897053Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eba4b13a-a1cb-42cf-aef8-f7dd32023915 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.875970237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eba4b13a-a1cb-42cf-aef8-f7dd32023915 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.876376715Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdbfadb4ed8d19096021578583930566b38bffb62ac75a0fa4bfa1854bc51c07,PodSandboxId:5718ac2f010731d932225775ad8b53843e8a598c210cccfd78b15fcdc08bebc4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713784773618457331,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7aebece9906bc4053b906e1ecb267481a893fb7ff00bed5de74ed1cfa54000,PodSandboxId:d7028b8f29863bad3892327487b14f468d4c9cb4a5469a9b4b34ae91148f52c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713784762600202240,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06935d6ef805f3d3cf6a05bcb64dc081d72aae88db019d142b68750d3cf1c867,PodSandboxId:c83e50830d3f4db878e319b1cce7bf9160c76a605bbddbb15186b52a363c346f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713784741599078298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9805a7cceb20ebe5dd98c40f4989b29929823f101fc5fa3e52ce922be823cf,PodSandboxId:3bce0f832a4b7c43c1a8d39bf39eebbaf7ef2c41958b8711e7cf2f48c892aa3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713784734596266351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef4332ea1f047dc3cfb415e4e3f6e85cf33465b2ef9f18f7951278d2479ca93,PodSandboxId:be2b7bbc0a977b699d86984cd22eb583aada9b085c8a1907e359c7b60a8b31c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713784732974770708,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84fb7087c9854ea4137c3862fe9dc600a9d3f90b3fbc9522a51e51e681a08e1,PodSandboxId:0fdc24e0bdf40a907df680923387fa4412d3fe58200f36e9c854f25f4915fb23,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713784715707288035,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd9482ff289c9d12747129b48272b7a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:d3cbf7c282792930e1df477971a4bd28b78cb49c295f6e0ac2c8a454824de5d2,PodSandboxId:d7028b8f29863bad3892327487b14f468d4c9cb4a5469a9b4b34ae91148f52c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713784699818708388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:45ee3a04fea005538c47381509fdf3d9e53cfe0bb8e8e14149e912ea8a67cfd8,PodSandboxId:e92faf278f88298e80a1a96b654d4a23ef26ee537f98d7b37fa7e8ecaaaf94c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713784699723405687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba046555
40d31d9ebf03299c794d1c3c3623ea9026908acaac007ac12a740b4,PodSandboxId:2800fa5fa268e57b67df341230522c1d69a07af4049cd86a97df3ceb3abff22c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784700024922806,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:086db7b19ea3bed15f2bf46fc53e6befb389e2aa6d163eb0290b45841b20a974,PodSandboxId:c1f81420600b023ee32a3073be284b93a2cbc2919d2092900ade645f006f514f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713784699657509914,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb1b67b39ae4f69aabbcde5efda711e2ad29a9f6926b4f1e78b54d9fcd92ed97,PodSandboxId:3bce0f832a4b7c43c1a8d39bf39eebbaf7ef2c41958b8711e7cf2f48c892aa3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713784699622280749,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27ec30a0ad7913b7b1f7c2670ac92c8e3c79a52b67ccae30cf41067898e375c,PodSandboxId:e1d9c3b2c209a79ca019d0c6b4e1e1d23a1390aed1ee7374d58e639804d6cc5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713784699637480543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594b38d4c919f0bd4386634ffa22d99b282bd1d1e2d832d8b64e67b021e866d4,PodSandboxId:c83e50830d3f4db878e319b1cce7bf9160c76a605bbddbb15186b52a363c346f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713784699510289451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b77e388cf4a08ba51bd96cd087b8b7ef6a23d957a16f1deb5ca55943ffe9f4,PodSandboxId:5718ac2f010731d932225775ad8b53843e8a598c210cccfd78b15fcdc08bebc4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713784699059530582,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bec52a480e1ba363d05d69d01be4a0cee8746d920f0865e277f9c21bc87cbe3,PodSandboxId:97870b1b56dc1a85e2557bc8e33b02398db33a96e36cc229f4632c06816d7196,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784698992367850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e45e23c690bb79c7fd65070b3188b60b1c0041e0955b10386851453d93e8c2,PodSandboxId:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713784211175334269,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernete
s.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391,PodSandboxId:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713784060436950826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139,PodSandboxId:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713784060349985507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269,PodSandboxId:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713784057949970963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5,PodSandboxId:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713784035610721822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803,PodSandboxId:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1713784035389376110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eba4b13a-a1cb-42cf-aef8-f7dd32023915 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.934470391Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0efeec7-1dbd-4430-994c-1f1c0b91b8a0 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.934611444Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0efeec7-1dbd-4430-994c-1f1c0b91b8a0 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.936292017Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=373dcd49-c098-463c-8b90-e3275b2a05c4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.936867634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713784847936832746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=373dcd49-c098-463c-8b90-e3275b2a05c4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.937796745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d307d552-449c-467c-b902-a47f900a4e93 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.937880469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d307d552-449c-467c-b902-a47f900a4e93 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.938393056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdbfadb4ed8d19096021578583930566b38bffb62ac75a0fa4bfa1854bc51c07,PodSandboxId:5718ac2f010731d932225775ad8b53843e8a598c210cccfd78b15fcdc08bebc4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713784773618457331,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7aebece9906bc4053b906e1ecb267481a893fb7ff00bed5de74ed1cfa54000,PodSandboxId:d7028b8f29863bad3892327487b14f468d4c9cb4a5469a9b4b34ae91148f52c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713784762600202240,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06935d6ef805f3d3cf6a05bcb64dc081d72aae88db019d142b68750d3cf1c867,PodSandboxId:c83e50830d3f4db878e319b1cce7bf9160c76a605bbddbb15186b52a363c346f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713784741599078298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9805a7cceb20ebe5dd98c40f4989b29929823f101fc5fa3e52ce922be823cf,PodSandboxId:3bce0f832a4b7c43c1a8d39bf39eebbaf7ef2c41958b8711e7cf2f48c892aa3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713784734596266351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef4332ea1f047dc3cfb415e4e3f6e85cf33465b2ef9f18f7951278d2479ca93,PodSandboxId:be2b7bbc0a977b699d86984cd22eb583aada9b085c8a1907e359c7b60a8b31c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713784732974770708,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84fb7087c9854ea4137c3862fe9dc600a9d3f90b3fbc9522a51e51e681a08e1,PodSandboxId:0fdc24e0bdf40a907df680923387fa4412d3fe58200f36e9c854f25f4915fb23,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713784715707288035,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd9482ff289c9d12747129b48272b7a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:d3cbf7c282792930e1df477971a4bd28b78cb49c295f6e0ac2c8a454824de5d2,PodSandboxId:d7028b8f29863bad3892327487b14f468d4c9cb4a5469a9b4b34ae91148f52c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713784699818708388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:45ee3a04fea005538c47381509fdf3d9e53cfe0bb8e8e14149e912ea8a67cfd8,PodSandboxId:e92faf278f88298e80a1a96b654d4a23ef26ee537f98d7b37fa7e8ecaaaf94c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713784699723405687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba046555
40d31d9ebf03299c794d1c3c3623ea9026908acaac007ac12a740b4,PodSandboxId:2800fa5fa268e57b67df341230522c1d69a07af4049cd86a97df3ceb3abff22c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784700024922806,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:086db7b19ea3bed15f2bf46fc53e6befb389e2aa6d163eb0290b45841b20a974,PodSandboxId:c1f81420600b023ee32a3073be284b93a2cbc2919d2092900ade645f006f514f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713784699657509914,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb1b67b39ae4f69aabbcde5efda711e2ad29a9f6926b4f1e78b54d9fcd92ed97,PodSandboxId:3bce0f832a4b7c43c1a8d39bf39eebbaf7ef2c41958b8711e7cf2f48c892aa3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713784699622280749,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27ec30a0ad7913b7b1f7c2670ac92c8e3c79a52b67ccae30cf41067898e375c,PodSandboxId:e1d9c3b2c209a79ca019d0c6b4e1e1d23a1390aed1ee7374d58e639804d6cc5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713784699637480543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594b38d4c919f0bd4386634ffa22d99b282bd1d1e2d832d8b64e67b021e866d4,PodSandboxId:c83e50830d3f4db878e319b1cce7bf9160c76a605bbddbb15186b52a363c346f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713784699510289451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b77e388cf4a08ba51bd96cd087b8b7ef6a23d957a16f1deb5ca55943ffe9f4,PodSandboxId:5718ac2f010731d932225775ad8b53843e8a598c210cccfd78b15fcdc08bebc4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713784699059530582,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bec52a480e1ba363d05d69d01be4a0cee8746d920f0865e277f9c21bc87cbe3,PodSandboxId:97870b1b56dc1a85e2557bc8e33b02398db33a96e36cc229f4632c06816d7196,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784698992367850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e45e23c690bb79c7fd65070b3188b60b1c0041e0955b10386851453d93e8c2,PodSandboxId:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713784211175334269,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernete
s.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391,PodSandboxId:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713784060436950826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139,PodSandboxId:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713784060349985507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269,PodSandboxId:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713784057949970963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5,PodSandboxId:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713784035610721822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803,PodSandboxId:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1713784035389376110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d307d552-449c-467c-b902-a47f900a4e93 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.941909725Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=78745427-bbb1-4ee1-9930-432b8b5c50b0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.942270616Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:be2b7bbc0a977b699d86984cd22eb583aada9b085c8a1907e359c7b60a8b31c4,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-b4r5w,Uid:1670d513-9071-4ee0-ae1b-7600c98019b8,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713784732802991974,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T11:10:08.190426540Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0fdc24e0bdf40a907df680923387fa4412d3fe58200f36e9c854f25f4915fb23,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-821265,Uid:7fd9482ff289c9d12747129b48272b7a,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1713784715590322992,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd9482ff289c9d12747129b48272b7a,},Annotations:map[string]string{kubernetes.io/config.hash: 7fd9482ff289c9d12747129b48272b7a,kubernetes.io/config.seen: 2024-04-22T11:18:16.891309927Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2800fa5fa268e57b67df341230522c1d69a07af4049cd86a97df3ceb3abff22c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-ft2jl,Uid:09e14815-b8e9-4b60-9b2c-c7d86cccb594,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713784699166450704,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04
-22T11:07:39.776188091Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d7028b8f29863bad3892327487b14f468d4c9cb4a5469a9b4b34ae91148f52c9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4b44da93-f3fa-49b7-a701-5ab7a430374f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713784699055470775,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":
\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-22T11:07:39.772838141Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3bce0f832a4b7c43c1a8d39bf39eebbaf7ef2c41958b8711e7cf2f48c892aa3e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-821265,Uid:0b2b58b303a812e19616ac42b0b60aae,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713784699032192467,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-ap
iserver.advertise-address.endpoint: 192.168.39.150:8443,kubernetes.io/config.hash: 0b2b58b303a812e19616ac42b0b60aae,kubernetes.io/config.seen: 2024-04-22T11:07:21.530072513Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c83e50830d3f4db878e319b1cce7bf9160c76a605bbddbb15186b52a363c346f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-821265,Uid:6e7e7ddac3eb004675c7add1d1e064dc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713784699016221538,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6e7e7ddac3eb004675c7add1d1e064dc,kubernetes.io/config.seen: 2024-04-22T11:07:21.530073935Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c1f81420600b023ee32a3073be284b93a2cbc29
19d2092900ade645f006f514f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-821265,Uid:0d47cc377f7ae04e53a8145721f1411a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713784698987876069,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0d47cc377f7ae04e53a8145721f1411a,kubernetes.io/config.seen: 2024-04-22T11:07:21.530074971Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e92faf278f88298e80a1a96b654d4a23ef26ee537f98d7b37fa7e8ecaaaf94c9,Metadata:&PodSandboxMetadata{Name:kube-proxy-w7r9d,Uid:56a4f7fc-5ce0-4d77-b30f-9d39cded457c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713784698982693264,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernet
es.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T11:07:37.297402525Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e1d9c3b2c209a79ca019d0c6b4e1e1d23a1390aed1ee7374d58e639804d6cc5d,Metadata:&PodSandboxMetadata{Name:etcd-ha-821265,Uid:b68bde0d14316a4c3a901fddeacfd54a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713784698962318904,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.150:2379,kubernetes.io/config.hash: b68bde0d14316a4c3a901fddeacfd54a,kubernetes.io/config.seen: 2024-04-22T11:07:21.53006
7950Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:97870b1b56dc1a85e2557bc8e33b02398db33a96e36cc229f4632c06816d7196,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-ht7jl,Uid:c404a830-ddce-4c49-9e54-05d45871b4b0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713784697325790987,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T11:07:39.765172392Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5718ac2f010731d932225775ad8b53843e8a598c210cccfd78b15fcdc08bebc4,Metadata:&PodSandboxMetadata{Name:kindnet-qbq9z,Uid:9751a17f-e26b-4ba8-81ce-077103c0aa1c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713784697282459317,Labels:map[string]string{app: kindnet,controlle
r-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T11:07:37.284615771Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-b4r5w,Uid:1670d513-9071-4ee0-ae1b-7600c98019b8,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713784208510394123,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T11:10:08.190426540Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-ft2jl,Uid:09e14815-b8e9-4b60-9b2c-c7d86cccb594,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713784060091226452,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T11:07:39.776188091Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-ht7jl,Uid:c404a830-ddce-4c49-9e54-05d45871b4b0,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713784060074349957,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T11:07:39.765172392Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&PodSandboxMetadata{Name:kube-proxy-w7r9d,Uid:56a4f7fc-5ce0-4d77-b30f-9d39cded457c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713784057626461096,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T11:07:37.297402525Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-821265,Uid:0d47cc377f7ae04e53a8145721f1411a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713784035200313592,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0d47cc377f7ae04e53a8145721f1411a,kubernetes.io/config.seen: 2024-04-22T11:07:14.725822574Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&PodSandboxMetadata{Name:etcd-ha-821265,Uid:b68bde0d14316a4c3a901fddeacfd54a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713784035196509363,Labels:map[string]string{component: etcd,io.kubernetes
.container.name: POD,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.150:2379,kubernetes.io/config.hash: b68bde0d14316a4c3a901fddeacfd54a,kubernetes.io/config.seen: 2024-04-22T11:07:14.725816107Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=78745427-bbb1-4ee1-9930-432b8b5c50b0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.943531048Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f84ecfb-9443-4974-b2f5-d361c15246eb name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.943714405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f84ecfb-9443-4974-b2f5-d361c15246eb name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:20:47 ha-821265 crio[3844]: time="2024-04-22 11:20:47.945789662Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdbfadb4ed8d19096021578583930566b38bffb62ac75a0fa4bfa1854bc51c07,PodSandboxId:5718ac2f010731d932225775ad8b53843e8a598c210cccfd78b15fcdc08bebc4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713784773618457331,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7aebece9906bc4053b906e1ecb267481a893fb7ff00bed5de74ed1cfa54000,PodSandboxId:d7028b8f29863bad3892327487b14f468d4c9cb4a5469a9b4b34ae91148f52c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713784762600202240,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06935d6ef805f3d3cf6a05bcb64dc081d72aae88db019d142b68750d3cf1c867,PodSandboxId:c83e50830d3f4db878e319b1cce7bf9160c76a605bbddbb15186b52a363c346f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713784741599078298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9805a7cceb20ebe5dd98c40f4989b29929823f101fc5fa3e52ce922be823cf,PodSandboxId:3bce0f832a4b7c43c1a8d39bf39eebbaf7ef2c41958b8711e7cf2f48c892aa3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713784734596266351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef4332ea1f047dc3cfb415e4e3f6e85cf33465b2ef9f18f7951278d2479ca93,PodSandboxId:be2b7bbc0a977b699d86984cd22eb583aada9b085c8a1907e359c7b60a8b31c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713784732974770708,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84fb7087c9854ea4137c3862fe9dc600a9d3f90b3fbc9522a51e51e681a08e1,PodSandboxId:0fdc24e0bdf40a907df680923387fa4412d3fe58200f36e9c854f25f4915fb23,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713784715707288035,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd9482ff289c9d12747129b48272b7a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:d3cbf7c282792930e1df477971a4bd28b78cb49c295f6e0ac2c8a454824de5d2,PodSandboxId:d7028b8f29863bad3892327487b14f468d4c9cb4a5469a9b4b34ae91148f52c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713784699818708388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:45ee3a04fea005538c47381509fdf3d9e53cfe0bb8e8e14149e912ea8a67cfd8,PodSandboxId:e92faf278f88298e80a1a96b654d4a23ef26ee537f98d7b37fa7e8ecaaaf94c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713784699723405687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba046555
40d31d9ebf03299c794d1c3c3623ea9026908acaac007ac12a740b4,PodSandboxId:2800fa5fa268e57b67df341230522c1d69a07af4049cd86a97df3ceb3abff22c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784700024922806,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:086db7b19ea3bed15f2bf46fc53e6befb389e2aa6d163eb0290b45841b20a974,PodSandboxId:c1f81420600b023ee32a3073be284b93a2cbc2919d2092900ade645f006f514f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713784699657509914,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb1b67b39ae4f69aabbcde5efda711e2ad29a9f6926b4f1e78b54d9fcd92ed97,PodSandboxId:3bce0f832a4b7c43c1a8d39bf39eebbaf7ef2c41958b8711e7cf2f48c892aa3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713784699622280749,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27ec30a0ad7913b7b1f7c2670ac92c8e3c79a52b67ccae30cf41067898e375c,PodSandboxId:e1d9c3b2c209a79ca019d0c6b4e1e1d23a1390aed1ee7374d58e639804d6cc5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713784699637480543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594b38d4c919f0bd4386634ffa22d99b282bd1d1e2d832d8b64e67b021e866d4,PodSandboxId:c83e50830d3f4db878e319b1cce7bf9160c76a605bbddbb15186b52a363c346f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713784699510289451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b77e388cf4a08ba51bd96cd087b8b7ef6a23d957a16f1deb5ca55943ffe9f4,PodSandboxId:5718ac2f010731d932225775ad8b53843e8a598c210cccfd78b15fcdc08bebc4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713784699059530582,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bec52a480e1ba363d05d69d01be4a0cee8746d920f0865e277f9c21bc87cbe3,PodSandboxId:97870b1b56dc1a85e2557bc8e33b02398db33a96e36cc229f4632c06816d7196,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784698992367850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e45e23c690bb79c7fd65070b3188b60b1c0041e0955b10386851453d93e8c2,PodSandboxId:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713784211175334269,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernete
s.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391,PodSandboxId:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713784060436950826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139,PodSandboxId:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713784060349985507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269,PodSandboxId:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713784057949970963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5,PodSandboxId:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713784035610721822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803,PodSandboxId:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1713784035389376110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f84ecfb-9443-4974-b2f5-d361c15246eb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bdbfadb4ed8d1       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   5718ac2f01073       kindnet-qbq9z
	bd7aebece9906       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   d7028b8f29863       storage-provisioner
	06935d6ef805f       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Running             kube-controller-manager   2                   c83e50830d3f4       kube-controller-manager-ha-821265
	2f9805a7cceb2       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            3                   3bce0f832a4b7       kube-apiserver-ha-821265
	bef4332ea1f04       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   be2b7bbc0a977       busybox-fc5497c4f-b4r5w
	e84fb7087c985       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  0                   0fdc24e0bdf40       kube-vip-ha-821265
	aba04655540d3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   2800fa5fa268e       coredns-7db6d8ff4d-ft2jl
	d3cbf7c282792       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   d7028b8f29863       storage-provisioner
	45ee3a04fea00       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      2 minutes ago        Running             kube-proxy                1                   e92faf278f882       kube-proxy-w7r9d
	086db7b19ea3b       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      2 minutes ago        Running             kube-scheduler            1                   c1f81420600b0       kube-scheduler-ha-821265
	d27ec30a0ad79       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   e1d9c3b2c209a       etcd-ha-821265
	fb1b67b39ae4f       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      2 minutes ago        Exited              kube-apiserver            2                   3bce0f832a4b7       kube-apiserver-ha-821265
	594b38d4c919f       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      2 minutes ago        Exited              kube-controller-manager   1                   c83e50830d3f4       kube-controller-manager-ha-821265
	65b77e388cf4a       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   5718ac2f01073       kindnet-qbq9z
	4bec52a480e1b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   97870b1b56dc1       coredns-7db6d8ff4d-ht7jl
	f9e45e23c690b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   82d54024bc68a       busybox-fc5497c4f-b4r5w
	28dbe3373b660       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   126db08ea55ac       coredns-7db6d8ff4d-ht7jl
	609e2855f754c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   84aaf42f76a8a       coredns-7db6d8ff4d-ft2jl
	1f43ea569f86c       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      13 minutes ago       Exited              kube-proxy                0                   626e64c737b2d       kube-proxy-w7r9d
	2b3935bd9c893       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      13 minutes ago       Exited              kube-scheduler            0                   68a372e9f954b       kube-scheduler-ha-821265
	ba49f85435f20       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   f773251009c17       etcd-ha-821265
	
	
	==> coredns [28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391] <==
	[INFO] 10.244.1.2:43358 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160709s
	[INFO] 10.244.1.2:55629 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195731s
	[INFO] 10.244.1.2:44290 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121655s
	[INFO] 10.244.1.2:57358 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121564s
	[INFO] 10.244.2.2:59048 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159182s
	[INFO] 10.244.2.2:35567 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001954066s
	[INFO] 10.244.2.2:51799 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000221645s
	[INFO] 10.244.2.2:34300 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001398818s
	[INFO] 10.244.2.2:44605 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141089s
	[INFO] 10.244.2.2:60699 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114317s
	[INFO] 10.244.2.2:47652 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110384s
	[INFO] 10.244.0.4:58761 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147629s
	[INFO] 10.244.0.4:45372 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061515s
	[INFO] 10.244.1.2:39990 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000301231s
	[INFO] 10.244.2.2:38384 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218658s
	[INFO] 10.244.2.2:42087 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096499s
	[INFO] 10.244.2.2:46418 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091631s
	[INFO] 10.244.0.4:38705 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140004s
	[INFO] 10.244.2.2:47355 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124377s
	[INFO] 10.244.2.2:41383 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000176022s
	[INFO] 10.244.2.2:36036 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000263019s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1887&timeout=6m7s&timeoutSeconds=367&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1881&timeout=7m19s&timeoutSeconds=439&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [4bec52a480e1ba363d05d69d01be4a0cee8746d920f0865e277f9c21bc87cbe3] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[2107535893]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 11:18:28.211) (total time: 10002ms):
	Trace[2107535893]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (11:18:38.214)
	Trace[2107535893]: [10.002204922s] [10.002204922s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35706->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1607095593]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 11:18:31.870) (total time: 12288ms):
	Trace[1607095593]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35706->10.96.0.1:443: read: connection reset by peer 12288ms (11:18:44.158)
	Trace[1607095593]: [12.288380026s] [12.288380026s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35706->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139] <==
	[INFO] 10.244.1.2:55844 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178035s
	[INFO] 10.244.2.2:56677 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000145596s
	[INFO] 10.244.2.2:55471 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000502508s
	[INFO] 10.244.0.4:48892 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000180363s
	[INFO] 10.244.0.4:39631 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015636s
	[INFO] 10.244.1.2:41139 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001436054s
	[INFO] 10.244.1.2:50039 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000238831s
	[INFO] 10.244.2.2:49593 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099929s
	[INFO] 10.244.0.4:33617 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078273s
	[INFO] 10.244.0.4:35287 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154317s
	[INFO] 10.244.1.2:52682 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133804s
	[INFO] 10.244.1.2:40594 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130792s
	[INFO] 10.244.1.2:39775 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009509s
	[INFO] 10.244.2.2:55863 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00021768s
	[INFO] 10.244.0.4:36835 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092568s
	[INFO] 10.244.0.4:53708 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00016929s
	[INFO] 10.244.0.4:44024 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000203916s
	[INFO] 10.244.1.2:50167 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158884s
	[INFO] 10.244.1.2:49103 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120664s
	[INFO] 10.244.1.2:44739 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000212444s
	[INFO] 10.244.1.2:43569 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000207516s
	[INFO] 10.244.2.2:48876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000228682s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1878&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [aba04655540d31d9ebf03299c794d1c3c3623ea9026908acaac007ac12a740b4] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43898->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[638564627]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 11:18:31.728) (total time: 12428ms):
	Trace[638564627]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43898->10.96.0.1:443: read: connection reset by peer 12428ms (11:18:44.157)
	Trace[638564627]: [12.428753633s] [12.428753633s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43898->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43892->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[63172965]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 11:18:31.714) (total time: 12443ms):
	Trace[63172965]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43892->10.96.0.1:443: read: connection reset by peer 12443ms (11:18:44.157)
	Trace[63172965]: [12.443525444s] [12.443525444s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43892->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-821265
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-821265
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=ha-821265
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T11_07_22_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:07:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-821265
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:20:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:19:07 +0000   Mon, 22 Apr 2024 11:07:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:19:07 +0000   Mon, 22 Apr 2024 11:07:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:19:07 +0000   Mon, 22 Apr 2024 11:07:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:19:07 +0000   Mon, 22 Apr 2024 11:07:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    ha-821265
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3708e3d49144fe9a219d30c45824055
	  System UUID:                e3708e3d-4914-4fe9-a219-d30c45824055
	  Boot ID:                    59d6bf31-99bc-4f8f-942a-1d3384515d3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-b4r5w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-ft2jl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-ht7jl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-821265                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-qbq9z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-821265             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-821265    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-w7r9d                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-821265             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-821265                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 105s               kube-proxy       
	  Normal   Starting                 13m                kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-821265 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node ha-821265 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-821265 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     13m                kubelet          Node ha-821265 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node ha-821265 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node ha-821265 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	  Normal   NodeReady                13m                kubelet          Node ha-821265 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	  Warning  ContainerGCFailed        3m27s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           91s                node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	  Normal   RegisteredNode           87s                node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	  Normal   RegisteredNode           32s                node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	
	
	Name:               ha-821265-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-821265-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=ha-821265
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T11_08_32_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:08:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-821265-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:20:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:19:53 +0000   Mon, 22 Apr 2024 11:19:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:19:53 +0000   Mon, 22 Apr 2024 11:19:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:19:53 +0000   Mon, 22 Apr 2024 11:19:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:19:53 +0000   Mon, 22 Apr 2024 11:19:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-821265-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee4ee33670c847d689ce31a8a149631b
	  System UUID:                ee4ee336-70c8-47d6-89ce-31a8a149631b
	  Boot ID:                    13e93955-74b9-4dbe-9ed2-e9a9f309e501
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ft78k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-821265-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-jm2pd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-821265-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-821265-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-j2hpk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-821265-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-821265-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 92s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-821265-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-821265-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-821265-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	  Normal  NodeNotReady             8m52s                node-controller  Node ha-821265-m02 status is now: NodeNotReady
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node ha-821265-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node ha-821265-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m4s)  kubelet          Node ha-821265-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           91s                  node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	  Normal  RegisteredNode           87s                  node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	  Normal  RegisteredNode           32s                  node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	
	
	Name:               ha-821265-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-821265-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=ha-821265
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T11_09_46_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:09:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-821265-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:20:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:20:16 +0000   Mon, 22 Apr 2024 11:09:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:20:16 +0000   Mon, 22 Apr 2024 11:09:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:20:16 +0000   Mon, 22 Apr 2024 11:09:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:20:16 +0000   Mon, 22 Apr 2024 11:09:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    ha-821265-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fae8daa600b4453d8a90a572a44f23c8
	  System UUID:                fae8daa6-00b4-453d-8a90-a572a44f23c8
	  Boot ID:                    0f85c0bd-1354-4a74-a57b-09e673c0d84f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fzcrw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-821265-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-d8qgr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-821265-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-821265-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-lmhp7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-821265-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-821265-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 44s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-821265-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-821265-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-821265-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-821265-m03 event: Registered Node ha-821265-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-821265-m03 event: Registered Node ha-821265-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-821265-m03 event: Registered Node ha-821265-m03 in Controller
	  Normal   RegisteredNode           91s                node-controller  Node ha-821265-m03 event: Registered Node ha-821265-m03 in Controller
	  Normal   RegisteredNode           87s                node-controller  Node ha-821265-m03 event: Registered Node ha-821265-m03 in Controller
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  63s                kubelet          Node ha-821265-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s                kubelet          Node ha-821265-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s                kubelet          Node ha-821265-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 63s                kubelet          Node ha-821265-m03 has been rebooted, boot id: 0f85c0bd-1354-4a74-a57b-09e673c0d84f
	  Normal   RegisteredNode           32s                node-controller  Node ha-821265-m03 event: Registered Node ha-821265-m03 in Controller
	
	
	Name:               ha-821265-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-821265-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=ha-821265
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T11_10_47_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:10:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-821265-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:20:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:20:39 +0000   Mon, 22 Apr 2024 11:20:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:20:39 +0000   Mon, 22 Apr 2024 11:20:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:20:39 +0000   Mon, 22 Apr 2024 11:20:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:20:39 +0000   Mon, 22 Apr 2024 11:20:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.252
	  Hostname:    ha-821265-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd9646c23a234a60a7a73b7377025a34
	  System UUID:                dd9646c2-3a23-4a60-a7a7-3b7377025a34
	  Boot ID:                    fd5616ec-d6c4-4418-82ba-4bb6990e0f81
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gvgbm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-hdvbv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 9m56s              kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-821265-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-821265-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-821265-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal   RegisteredNode           9m57s              node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal   NodeReady                9m51s              kubelet          Node ha-821265-m04 status is now: NodeReady
	  Normal   RegisteredNode           91s                node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal   RegisteredNode           87s                node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal   NodeNotReady             51s                node-controller  Node ha-821265-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           32s                node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)    kubelet          Node ha-821265-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)    kubelet          Node ha-821265-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)    kubelet          Node ha-821265-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s (x2 over 9s)    kubelet          Node ha-821265-m04 has been rebooted, boot id: fd5616ec-d6c4-4418-82ba-4bb6990e0f81
	  Normal   NodeReady                9s (x2 over 9s)    kubelet          Node ha-821265-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr22 11:07] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.062413] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064974] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.181323] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.148920] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.299663] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.930467] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.065860] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.137174] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.064357] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.162362] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.079557] kauditd_printk_skb: 79 callbacks suppressed
	[ +16.384158] kauditd_printk_skb: 21 callbacks suppressed
	[Apr22 11:08] kauditd_printk_skb: 74 callbacks suppressed
	[Apr22 11:15] kauditd_printk_skb: 1 callbacks suppressed
	[Apr22 11:18] systemd-fstab-generator[3761]: Ignoring "noauto" option for root device
	[  +0.159564] systemd-fstab-generator[3773]: Ignoring "noauto" option for root device
	[  +0.190785] systemd-fstab-generator[3787]: Ignoring "noauto" option for root device
	[  +0.167063] systemd-fstab-generator[3799]: Ignoring "noauto" option for root device
	[  +0.313719] systemd-fstab-generator[3827]: Ignoring "noauto" option for root device
	[  +1.090326] systemd-fstab-generator[3935]: Ignoring "noauto" option for root device
	[  +3.195514] kauditd_printk_skb: 202 callbacks suppressed
	[ +11.480878] kauditd_printk_skb: 5 callbacks suppressed
	[ +10.074747] kauditd_printk_skb: 1 callbacks suppressed
	[Apr22 11:19] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803] <==
	2024/04/22 11:16:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 11:16:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 11:16:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 11:16:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 11:16:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 11:16:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 11:16:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-22T11:16:43.48226Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"2236e2deb63504cb","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-22T11:16:43.482492Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e74e5cf98cfb462d"}
	{"level":"info","ts":"2024-04-22T11:16:43.482616Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e74e5cf98cfb462d"}
	{"level":"info","ts":"2024-04-22T11:16:43.482674Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e74e5cf98cfb462d"}
	{"level":"info","ts":"2024-04-22T11:16:43.482821Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d"}
	{"level":"info","ts":"2024-04-22T11:16:43.482895Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d"}
	{"level":"info","ts":"2024-04-22T11:16:43.482957Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d"}
	{"level":"info","ts":"2024-04-22T11:16:43.482994Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e74e5cf98cfb462d"}
	{"level":"info","ts":"2024-04-22T11:16:43.483047Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:16:43.483083Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:16:43.483133Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:16:43.483246Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:16:43.48331Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:16:43.483381Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:16:43.483417Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:16:43.487528Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2024-04-22T11:16:43.487779Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2024-04-22T11:16:43.487838Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-821265","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.150:2380"],"advertise-client-urls":["https://192.168.39.150:2379"]}
	
	
	==> etcd [d27ec30a0ad7913b7b1f7c2670ac92c8e3c79a52b67ccae30cf41067898e375c] <==
	{"level":"warn","ts":"2024-04-22T11:19:40.702657Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"67256953526d7fbe","rtt":"0s","error":"dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T11:19:44.484435Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.95:2380/version","remote-member-id":"67256953526d7fbe","error":"Get \"https://192.168.39.95:2380/version\": dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T11:19:44.484509Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"67256953526d7fbe","error":"Get \"https://192.168.39.95:2380/version\": dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T11:19:45.703662Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"67256953526d7fbe","rtt":"0s","error":"dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T11:19:45.703775Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"67256953526d7fbe","rtt":"0s","error":"dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T11:19:48.487491Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.95:2380/version","remote-member-id":"67256953526d7fbe","error":"Get \"https://192.168.39.95:2380/version\": dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T11:19:48.487534Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"67256953526d7fbe","error":"Get \"https://192.168.39.95:2380/version\": dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T11:19:50.704235Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"67256953526d7fbe","rtt":"0s","error":"dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T11:19:50.704367Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"67256953526d7fbe","rtt":"0s","error":"dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T11:19:52.489162Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.95:2380/version","remote-member-id":"67256953526d7fbe","error":"Get \"https://192.168.39.95:2380/version\": dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T11:19:52.489246Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"67256953526d7fbe","error":"Get \"https://192.168.39.95:2380/version\": dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T11:19:55.704705Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"67256953526d7fbe","rtt":"0s","error":"dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T11:19:55.704845Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"67256953526d7fbe","rtt":"0s","error":"dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T11:19:56.490844Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.95:2380/version","remote-member-id":"67256953526d7fbe","error":"Get \"https://192.168.39.95:2380/version\": dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T11:19:56.490915Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"67256953526d7fbe","error":"Get \"https://192.168.39.95:2380/version\": dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-22T11:19:59.078757Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:19:59.108329Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"2236e2deb63504cb","to":"67256953526d7fbe","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-22T11:19:59.108452Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:19:59.114318Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:19:59.122922Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:19:59.124026Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"2236e2deb63504cb","to":"67256953526d7fbe","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-22T11:19:59.124087Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe"}
	{"level":"warn","ts":"2024-04-22T11:19:59.142662Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.95:45970","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-04-22T11:20:00.155319Z","caller":"traceutil/trace.go:171","msg":"trace[1805991195] transaction","detail":"{read_only:false; response_revision:2351; number_of_response:1; }","duration":"159.448883ms","start":"2024-04-22T11:19:59.995828Z","end":"2024-04-22T11:20:00.155277Z","steps":["trace[1805991195] 'process raft request'  (duration: 157.620199ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:20:00.705224Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"67256953526d7fbe","rtt":"0s","error":"dial tcp 192.168.39.95:2380: connect: connection refused"}
	
	
	==> kernel <==
	 11:20:48 up 14 min,  0 users,  load average: 0.49, 0.51, 0.34
	Linux ha-821265 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [65b77e388cf4a08ba51bd96cd087b8b7ef6a23d957a16f1deb5ca55943ffe9f4] <==
	I0422 11:18:19.882446       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0422 11:18:19.884707       1 main.go:107] hostIP = 192.168.39.150
	podIP = 192.168.39.150
	I0422 11:18:19.884929       1 main.go:116] setting mtu 1500 for CNI 
	I0422 11:18:19.884944       1 main.go:146] kindnetd IP family: "ipv4"
	I0422 11:18:19.884967       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0422 11:18:20.208630       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0422 11:18:22.653121       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0422 11:18:25.725425       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0422 11:18:37.729113       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0422 11:18:41.085059       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [bdbfadb4ed8d19096021578583930566b38bffb62ac75a0fa4bfa1854bc51c07] <==
	I0422 11:20:14.641264       1 main.go:250] Node ha-821265-m04 has CIDR [10.244.3.0/24] 
	I0422 11:20:24.678164       1 main.go:223] Handling node with IPs: map[192.168.39.150:{}]
	I0422 11:20:24.678290       1 main.go:227] handling current node
	I0422 11:20:24.678347       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I0422 11:20:24.678366       1 main.go:250] Node ha-821265-m02 has CIDR [10.244.1.0/24] 
	I0422 11:20:24.678881       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0422 11:20:24.678963       1 main.go:250] Node ha-821265-m03 has CIDR [10.244.2.0/24] 
	I0422 11:20:24.679160       1 main.go:223] Handling node with IPs: map[192.168.39.252:{}]
	I0422 11:20:24.679238       1 main.go:250] Node ha-821265-m04 has CIDR [10.244.3.0/24] 
	I0422 11:20:34.692795       1 main.go:223] Handling node with IPs: map[192.168.39.150:{}]
	I0422 11:20:34.776900       1 main.go:227] handling current node
	I0422 11:20:34.777293       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I0422 11:20:34.777863       1 main.go:250] Node ha-821265-m02 has CIDR [10.244.1.0/24] 
	I0422 11:20:34.778912       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0422 11:20:34.779034       1 main.go:250] Node ha-821265-m03 has CIDR [10.244.2.0/24] 
	I0422 11:20:34.779297       1 main.go:223] Handling node with IPs: map[192.168.39.252:{}]
	I0422 11:20:34.779701       1 main.go:250] Node ha-821265-m04 has CIDR [10.244.3.0/24] 
	I0422 11:20:44.798338       1 main.go:223] Handling node with IPs: map[192.168.39.150:{}]
	I0422 11:20:44.799252       1 main.go:227] handling current node
	I0422 11:20:44.799331       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I0422 11:20:44.799356       1 main.go:250] Node ha-821265-m02 has CIDR [10.244.1.0/24] 
	I0422 11:20:44.799606       1 main.go:223] Handling node with IPs: map[192.168.39.95:{}]
	I0422 11:20:44.799647       1 main.go:250] Node ha-821265-m03 has CIDR [10.244.2.0/24] 
	I0422 11:20:44.799747       1 main.go:223] Handling node with IPs: map[192.168.39.252:{}]
	I0422 11:20:44.799770       1 main.go:250] Node ha-821265-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2f9805a7cceb20ebe5dd98c40f4989b29929823f101fc5fa3e52ce922be823cf] <==
	I0422 11:19:05.011000       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0422 11:19:05.011039       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0422 11:19:05.110290       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0422 11:19:05.130094       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 11:19:05.130308       1 policy_source.go:224] refreshing policies
	I0422 11:19:05.130267       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0422 11:19:05.134676       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0422 11:19:05.135220       1 shared_informer.go:320] Caches are synced for configmaps
	I0422 11:19:05.136499       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0422 11:19:05.136690       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0422 11:19:05.137358       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0422 11:19:05.143144       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0422 11:19:05.143254       1 aggregator.go:165] initial CRD sync complete...
	I0422 11:19:05.143294       1 autoregister_controller.go:141] Starting autoregister controller
	I0422 11:19:05.143317       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0422 11:19:05.143340       1 cache.go:39] Caches are synced for autoregister controller
	I0422 11:19:05.144658       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0422 11:19:05.155166       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.95]
	I0422 11:19:05.156945       1 controller.go:615] quota admission added evaluator for: endpoints
	I0422 11:19:05.165822       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0422 11:19:05.173518       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0422 11:19:05.210386       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0422 11:19:05.944101       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0422 11:19:06.299150       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.150 192.168.39.95]
	W0422 11:19:26.297311       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.150 192.168.39.39]
	
	
	==> kube-apiserver [fb1b67b39ae4f69aabbcde5efda711e2ad29a9f6926b4f1e78b54d9fcd92ed97] <==
	I0422 11:18:20.315680       1 options.go:221] external host was not specified, using 192.168.39.150
	I0422 11:18:20.324054       1 server.go:148] Version: v1.30.0
	I0422 11:18:20.324090       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:18:21.328015       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0422 11:18:21.331619       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 11:18:21.333210       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0422 11:18:21.333408       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0422 11:18:21.333646       1 instance.go:299] Using reconciler: lease
	W0422 11:18:41.328452       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0422 11:18:41.328893       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0422 11:18:41.334765       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0422 11:18:41.334925       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [06935d6ef805f3d3cf6a05bcb64dc081d72aae88db019d142b68750d3cf1c867] <==
	I0422 11:19:17.684524       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0422 11:19:17.684757       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0422 11:19:17.687079       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0422 11:19:17.691135       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0422 11:19:17.723762       1 shared_informer.go:320] Caches are synced for resource quota
	I0422 11:19:17.782141       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0422 11:19:17.800856       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-821265-m02"
	I0422 11:19:17.801149       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-821265-m03"
	I0422 11:19:17.801198       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-821265-m04"
	I0422 11:19:17.801248       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-821265"
	I0422 11:19:17.806529       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0422 11:19:18.176354       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.439436ms"
	I0422 11:19:18.176477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.508µs"
	I0422 11:19:18.226260       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 11:19:18.227483       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 11:19:18.227528       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0422 11:19:20.116189       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.249087ms"
	I0422 11:19:20.119765       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="2.291299ms"
	I0422 11:19:20.144384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.570764ms"
	I0422 11:19:20.144756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="205.396µs"
	I0422 11:19:46.946964       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.261305ms"
	I0422 11:19:46.947631       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.147µs"
	I0422 11:20:06.149880       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.567414ms"
	I0422 11:20:06.149992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.76µs"
	I0422 11:20:39.556893       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-821265-m04"
	
	
	==> kube-controller-manager [594b38d4c919f0bd4386634ffa22d99b282bd1d1e2d832d8b64e67b021e866d4] <==
	I0422 11:18:21.135631       1 serving.go:380] Generated self-signed cert in-memory
	I0422 11:18:21.900817       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0422 11:18:21.900911       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:18:21.902992       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0422 11:18:21.903104       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0422 11:18:21.903123       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0422 11:18:21.903138       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0422 11:18:42.342400       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.150:8443/healthz\": dial tcp 192.168.39.150:8443: connect: connection refused"
	
	
	==> kube-proxy [1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269] <==
	E0422 11:15:37.406328       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1887": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:15:37.406398       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:15:37.406445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:15:37.406416       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-821265&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:15:37.406486       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-821265&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:15:43.549671       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-821265&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:15:43.549798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-821265&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:15:43.549915       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1887": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:15:43.549958       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1887": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:15:43.550025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:15:43.550054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:15:52.766496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-821265&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:15:52.766686       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-821265&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:15:52.766773       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:15:52.767043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:15:55.837807       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1887": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:15:55.837887       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1887": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:16:08.125822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-821265&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:16:08.126316       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-821265&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:16:11.197210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:16:11.197284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:16:14.270702       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1887": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:16:14.270799       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1887": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:16:38.846135       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:16:38.846537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [45ee3a04fea005538c47381509fdf3d9e53cfe0bb8e8e14149e912ea8a67cfd8] <==
	E0422 11:18:44.798334       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-821265\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0422 11:19:03.229700       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-821265\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0422 11:19:03.229787       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0422 11:19:03.276816       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 11:19:03.276932       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 11:19:03.276956       1 server_linux.go:165] "Using iptables Proxier"
	I0422 11:19:03.280089       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 11:19:03.280481       1 server.go:872] "Version info" version="v1.30.0"
	I0422 11:19:03.280605       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:19:03.282987       1 config.go:192] "Starting service config controller"
	I0422 11:19:03.283046       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 11:19:03.283096       1 config.go:101] "Starting endpoint slice config controller"
	I0422 11:19:03.283113       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 11:19:03.284023       1 config.go:319] "Starting node config controller"
	I0422 11:19:03.284068       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0422 11:19:06.303360       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0422 11:19:06.303797       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:19:06.304046       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:19:06.304014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:19:06.304498       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:19:06.303899       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-821265&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:19:06.304772       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-821265&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0422 11:19:07.183470       1 shared_informer.go:320] Caches are synced for service config
	I0422 11:19:07.484367       1 shared_informer.go:320] Caches are synced for node config
	I0422 11:19:07.585488       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [086db7b19ea3bed15f2bf46fc53e6befb389e2aa6d163eb0290b45841b20a974] <==
	W0422 11:19:05.046529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 11:19:05.048679       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 11:19:05.048861       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 11:19:05.048910       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0422 11:19:05.048988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 11:19:05.048998       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 11:19:05.049034       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 11:19:05.049072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 11:19:05.049122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 11:19:05.049132       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 11:19:05.049225       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 11:19:05.049263       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 11:19:05.049308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 11:19:05.049316       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 11:19:05.049359       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 11:19:05.049407       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0422 11:19:05.049444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 11:19:05.049458       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 11:19:05.049499       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 11:19:05.049508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 11:19:05.049654       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 11:19:05.049693       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 11:19:05.049794       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 11:19:05.049831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0422 11:19:18.960696       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5] <==
	W0422 11:16:39.610805       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 11:16:39.610902       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0422 11:16:40.000188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 11:16:40.000306       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 11:16:40.072335       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 11:16:40.072405       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 11:16:40.169436       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 11:16:40.169497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 11:16:40.179026       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 11:16:40.179061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 11:16:40.312136       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 11:16:40.312206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0422 11:16:40.332795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 11:16:40.332883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0422 11:16:40.405487       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 11:16:40.405706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 11:16:40.498938       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 11:16:40.499043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 11:16:42.007234       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 11:16:42.007294       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 11:16:42.487490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 11:16:42.487759       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 11:16:43.289079       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 11:16:43.289144       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 11:16:43.406821       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 22 11:19:06 ha-821265 kubelet[1370]: W0422 11:19:06.301121    1370 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	Apr 22 11:19:06 ha-821265 kubelet[1370]: E0422 11:19:06.302129    1370 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	Apr 22 11:19:06 ha-821265 kubelet[1370]: W0422 11:19:06.301281    1370 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)coredns&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	Apr 22 11:19:06 ha-821265 kubelet[1370]: E0422 11:19:06.302290    1370 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)coredns&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	Apr 22 11:19:06 ha-821265 kubelet[1370]: E0422 11:19:06.301223    1370 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-821265\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-821265?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Apr 22 11:19:07 ha-821265 kubelet[1370]: I0422 11:19:07.582804    1370 scope.go:117] "RemoveContainer" containerID="65b77e388cf4a08ba51bd96cd087b8b7ef6a23d957a16f1deb5ca55943ffe9f4"
	Apr 22 11:19:07 ha-821265 kubelet[1370]: E0422 11:19:07.583153    1370 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-qbq9z_kube-system(9751a17f-e26b-4ba8-81ce-077103c0aa1c)\"" pod="kube-system/kindnet-qbq9z" podUID="9751a17f-e26b-4ba8-81ce-077103c0aa1c"
	Apr 22 11:19:08 ha-821265 kubelet[1370]: I0422 11:19:08.582742    1370 scope.go:117] "RemoveContainer" containerID="d3cbf7c282792930e1df477971a4bd28b78cb49c295f6e0ac2c8a454824de5d2"
	Apr 22 11:19:08 ha-821265 kubelet[1370]: E0422 11:19:08.583119    1370 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4b44da93-f3fa-49b7-a701-5ab7a430374f)\"" pod="kube-system/storage-provisioner" podUID="4b44da93-f3fa-49b7-a701-5ab7a430374f"
	Apr 22 11:19:21 ha-821265 kubelet[1370]: I0422 11:19:21.592492    1370 scope.go:117] "RemoveContainer" containerID="65b77e388cf4a08ba51bd96cd087b8b7ef6a23d957a16f1deb5ca55943ffe9f4"
	Apr 22 11:19:21 ha-821265 kubelet[1370]: E0422 11:19:21.592795    1370 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-qbq9z_kube-system(9751a17f-e26b-4ba8-81ce-077103c0aa1c)\"" pod="kube-system/kindnet-qbq9z" podUID="9751a17f-e26b-4ba8-81ce-077103c0aa1c"
	Apr 22 11:19:21 ha-821265 kubelet[1370]: E0422 11:19:21.623275    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:19:21 ha-821265 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:19:21 ha-821265 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:19:21 ha-821265 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:19:21 ha-821265 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:19:22 ha-821265 kubelet[1370]: I0422 11:19:22.582905    1370 scope.go:117] "RemoveContainer" containerID="d3cbf7c282792930e1df477971a4bd28b78cb49c295f6e0ac2c8a454824de5d2"
	Apr 22 11:19:33 ha-821265 kubelet[1370]: I0422 11:19:33.582422    1370 scope.go:117] "RemoveContainer" containerID="65b77e388cf4a08ba51bd96cd087b8b7ef6a23d957a16f1deb5ca55943ffe9f4"
	Apr 22 11:19:55 ha-821265 kubelet[1370]: I0422 11:19:55.582463    1370 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-821265" podUID="9322f0ee-9e3e-4585-9388-44ccd1417371"
	Apr 22 11:19:55 ha-821265 kubelet[1370]: I0422 11:19:55.604931    1370 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-821265"
	Apr 22 11:20:21 ha-821265 kubelet[1370]: E0422 11:20:21.619185    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:20:21 ha-821265 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:20:21 ha-821265 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:20:21 ha-821265 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:20:21 ha-821265 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 11:20:47.393306   35744 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18711-7633/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
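A note on the stderr line above: "bufio.Scanner: token too long" is Go's bufio.ErrTooLong, returned when a single line in lastStart.txt exceeds the scanner's default 64 KiB token limit (bufio.MaxScanTokenSize), so the log collector skips that file; it is a side issue in log gathering, not part of the test failure itself. A minimal standalone sketch (not minikube's actual reader) that reproduces the error and shows the usual fix of enlarging the scanner buffer:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// A single line longer than bufio.MaxScanTokenSize (64 KiB by default)
	// makes Scan return false with bufio.ErrTooLong, which prints as
	// "bufio.Scanner: token too long".
	long := strings.Repeat("x", bufio.MaxScanTokenSize+1)

	s := bufio.NewScanner(strings.NewReader(long))
	for s.Scan() {
	}
	fmt.Println("default buffer:", s.Err()) // bufio.Scanner: token too long

	// Enlarging the scanner's buffer lets the same oversized line through.
	s = bufio.NewScanner(strings.NewReader(long))
	s.Buffer(make([]byte, 0, 64*1024), 1<<20) // allow tokens up to 1 MiB
	for s.Scan() {
	}
	fmt.Println("1 MiB buffer:", s.Err()) // <nil>
}

With the enlarged buffer the same line scans cleanly, which is why raising the limit via Scanner.Buffer is the common workaround for very long log lines.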
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-821265 -n ha-821265
helpers_test.go:261: (dbg) Run:  kubectl --context ha-821265 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.22s)
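For readers puzzled by the kube-proxy and kubelet URLs in the logs above: fragments such as %!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless are not corrupted lines. The label selector !service.kubernetes.io/headless,!service.kubernetes.io/service-proxy-name is percent-encoded in the request URL (%21, %2F, %2C, %3D), and the output is consistent with that URL later passing through a Printf-style formatter, where %21s, %2F, %2C and %3D parse as format verbs with no matching arguments. A minimal sketch of the mechanism (the URL and call sites are illustrative, not the actual client-go/klog code):

package main

import "fmt"

func main() {
	// The selector "!service.kubernetes.io/headless,!service.kubernetes.io/service-proxy-name"
	// becomes %21...%2F...%2C... once URL-encoded, as in the reflector logs above.
	url := "https://host:8443/api/v1/services?labelSelector=" +
		"%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name"

	// Treating the URL as data prints it intact.
	fmt.Printf("failed to list *v1.Service: Get %q\n", url)

	// Treating the URL as the format string parses %21s, %2F and %2C as verbs
	// with missing arguments, reproducing the %!s(MISSING), %!F(MISSING) and
	// %!C(MISSING) artifacts seen in the captured logs.
	fmt.Printf("failed to list *v1.Service: Get \"" + url + "\"\n")
}

The first Printf is the correct pattern; the second reproduces the garbled-looking output verbatim, which is why those log lines are left exactly as captured.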

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 stop -v=7 --alsologtostderr
E0422 11:21:17.643946   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
E0422 11:21:57.324104   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-821265 stop -v=7 --alsologtostderr: exit status 82 (2m0.491554803s)

                                                
                                                
-- stdout --
	* Stopping node "ha-821265-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 11:21:07.995185   36149 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:21:07.995326   36149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:21:07.995335   36149 out.go:304] Setting ErrFile to fd 2...
	I0422 11:21:07.995338   36149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:21:07.995516   36149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:21:07.995750   36149 out.go:298] Setting JSON to false
	I0422 11:21:07.995830   36149 mustload.go:65] Loading cluster: ha-821265
	I0422 11:21:07.996161   36149 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:21:07.996252   36149 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:21:07.996424   36149 mustload.go:65] Loading cluster: ha-821265
	I0422 11:21:07.996560   36149 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:21:07.996587   36149 stop.go:39] StopHost: ha-821265-m04
	I0422 11:21:07.997014   36149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:21:07.997055   36149 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:21:08.012350   36149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44843
	I0422 11:21:08.012849   36149 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:21:08.013458   36149 main.go:141] libmachine: Using API Version  1
	I0422 11:21:08.013496   36149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:21:08.013800   36149 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:21:08.016319   36149 out.go:177] * Stopping node "ha-821265-m04"  ...
	I0422 11:21:08.017866   36149 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0422 11:21:08.017895   36149 main.go:141] libmachine: (ha-821265-m04) Calling .DriverName
	I0422 11:21:08.018168   36149 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0422 11:21:08.018195   36149 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHHostname
	I0422 11:21:08.020974   36149 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:21:08.021373   36149 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:20:33 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:21:08.021401   36149 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:21:08.021505   36149 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHPort
	I0422 11:21:08.021670   36149 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHKeyPath
	I0422 11:21:08.021838   36149 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHUsername
	I0422 11:21:08.021992   36149 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m04/id_rsa Username:docker}
	I0422 11:21:08.104205   36149 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0422 11:21:08.159610   36149 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0422 11:21:08.214425   36149 main.go:141] libmachine: Stopping "ha-821265-m04"...
	I0422 11:21:08.214455   36149 main.go:141] libmachine: (ha-821265-m04) Calling .GetState
	I0422 11:21:08.216045   36149 main.go:141] libmachine: (ha-821265-m04) Calling .Stop
	I0422 11:21:08.219996   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 0/120
	I0422 11:21:09.221495   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 1/120
	I0422 11:21:10.223114   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 2/120
	I0422 11:21:11.224508   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 3/120
	I0422 11:21:12.225863   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 4/120
	I0422 11:21:13.227756   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 5/120
	I0422 11:21:14.229151   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 6/120
	I0422 11:21:15.231224   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 7/120
	I0422 11:21:16.232497   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 8/120
	I0422 11:21:17.234060   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 9/120
	I0422 11:21:18.235696   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 10/120
	I0422 11:21:19.237320   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 11/120
	I0422 11:21:20.239302   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 12/120
	I0422 11:21:21.240743   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 13/120
	I0422 11:21:22.242198   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 14/120
	I0422 11:21:23.244342   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 15/120
	I0422 11:21:24.246167   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 16/120
	I0422 11:21:25.247681   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 17/120
	I0422 11:21:26.249332   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 18/120
	I0422 11:21:27.251332   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 19/120
	I0422 11:21:28.253523   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 20/120
	I0422 11:21:29.255611   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 21/120
	I0422 11:21:30.256864   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 22/120
	I0422 11:21:31.258116   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 23/120
	I0422 11:21:32.259609   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 24/120
	I0422 11:21:33.262029   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 25/120
	I0422 11:21:34.263391   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 26/120
	I0422 11:21:35.264689   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 27/120
	I0422 11:21:36.266052   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 28/120
	I0422 11:21:37.267634   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 29/120
	I0422 11:21:38.269770   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 30/120
	I0422 11:21:39.271951   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 31/120
	I0422 11:21:40.273289   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 32/120
	I0422 11:21:41.274850   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 33/120
	I0422 11:21:42.276394   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 34/120
	I0422 11:21:43.278802   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 35/120
	I0422 11:21:44.280300   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 36/120
	I0422 11:21:45.281920   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 37/120
	I0422 11:21:46.283126   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 38/120
	I0422 11:21:47.284607   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 39/120
	I0422 11:21:48.286771   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 40/120
	I0422 11:21:49.288082   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 41/120
	I0422 11:21:50.289410   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 42/120
	I0422 11:21:51.291229   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 43/120
	I0422 11:21:52.292656   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 44/120
	I0422 11:21:53.294075   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 45/120
	I0422 11:21:54.295494   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 46/120
	I0422 11:21:55.297748   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 47/120
	I0422 11:21:56.299250   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 48/120
	I0422 11:21:57.300785   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 49/120
	I0422 11:21:58.302702   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 50/120
	I0422 11:21:59.304739   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 51/120
	I0422 11:22:00.306067   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 52/120
	I0422 11:22:01.307629   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 53/120
	I0422 11:22:02.309156   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 54/120
	I0422 11:22:03.311207   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 55/120
	I0422 11:22:04.313067   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 56/120
	I0422 11:22:05.315445   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 57/120
	I0422 11:22:06.316996   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 58/120
	I0422 11:22:07.319354   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 59/120
	I0422 11:22:08.321375   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 60/120
	I0422 11:22:09.323713   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 61/120
	I0422 11:22:10.325456   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 62/120
	I0422 11:22:11.326787   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 63/120
	I0422 11:22:12.328233   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 64/120
	I0422 11:22:13.330849   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 65/120
	I0422 11:22:14.332181   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 66/120
	I0422 11:22:15.333550   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 67/120
	I0422 11:22:16.335444   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 68/120
	I0422 11:22:17.337026   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 69/120
	I0422 11:22:18.339241   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 70/120
	I0422 11:22:19.340647   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 71/120
	I0422 11:22:20.342112   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 72/120
	I0422 11:22:21.343523   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 73/120
	I0422 11:22:22.344902   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 74/120
	I0422 11:22:23.346931   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 75/120
	I0422 11:22:24.348155   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 76/120
	I0422 11:22:25.349514   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 77/120
	I0422 11:22:26.351275   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 78/120
	I0422 11:22:27.352560   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 79/120
	I0422 11:22:28.354661   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 80/120
	I0422 11:22:29.355988   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 81/120
	I0422 11:22:30.357395   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 82/120
	I0422 11:22:31.359136   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 83/120
	I0422 11:22:32.360878   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 84/120
	I0422 11:22:33.362779   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 85/120
	I0422 11:22:34.364155   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 86/120
	I0422 11:22:35.365531   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 87/120
	I0422 11:22:36.367327   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 88/120
	I0422 11:22:37.368490   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 89/120
	I0422 11:22:38.370895   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 90/120
	I0422 11:22:39.372308   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 91/120
	I0422 11:22:40.373860   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 92/120
	I0422 11:22:41.375489   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 93/120
	I0422 11:22:42.376745   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 94/120
	I0422 11:22:43.378726   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 95/120
	I0422 11:22:44.380164   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 96/120
	I0422 11:22:45.382170   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 97/120
	I0422 11:22:46.383627   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 98/120
	I0422 11:22:47.384954   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 99/120
	I0422 11:22:48.387028   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 100/120
	I0422 11:22:49.388337   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 101/120
	I0422 11:22:50.389623   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 102/120
	I0422 11:22:51.391364   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 103/120
	I0422 11:22:52.393478   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 104/120
	I0422 11:22:53.394898   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 105/120
	I0422 11:22:54.395995   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 106/120
	I0422 11:22:55.397290   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 107/120
	I0422 11:22:56.399082   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 108/120
	I0422 11:22:57.401169   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 109/120
	I0422 11:22:58.403218   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 110/120
	I0422 11:22:59.405073   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 111/120
	I0422 11:23:00.406498   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 112/120
	I0422 11:23:01.408732   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 113/120
	I0422 11:23:02.409965   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 114/120
	I0422 11:23:03.412329   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 115/120
	I0422 11:23:04.414354   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 116/120
	I0422 11:23:05.416100   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 117/120
	I0422 11:23:06.417441   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 118/120
	I0422 11:23:07.419203   36149 main.go:141] libmachine: (ha-821265-m04) Waiting for machine to stop 119/120
	I0422 11:23:08.420449   36149 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0422 11:23:08.420495   36149 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0422 11:23:08.422436   36149 out.go:177] 
	W0422 11:23:08.423823   36149 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0422 11:23:08.423843   36149 out.go:239] * 
	* 
	W0422 11:23:08.426194   36149 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 11:23:08.427701   36149 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-821265 stop -v=7 --alsologtostderr": exit status 82
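Note: the stop above polled the ha-821265-m04 VM roughly 120 times at one-second intervals and then gave up with GUEST_STOP_TIMEOUT (exit status 82) while libvirt still reported the domain as "Running". The following is a minimal, hypothetical sketch (not part of the minikube test suite or its recovery logic) of how one might confirm and hard-stop the stuck domain by shelling out to virsh; it assumes virsh is installed on the host and that the libvirt domain name matches the minikube node name shown in the log.

	// force_stop_sketch.go - illustration only; domain name and virsh availability are assumptions.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		domain := "ha-821265-m04" // assumed to equal the stuck node's name from the log above

		// Ask libvirt for the current state of the domain.
		out, err := exec.Command("virsh", "domstate", domain).CombinedOutput()
		if err != nil {
			log.Fatalf("virsh domstate failed: %v (%s)", err, out)
		}
		state := strings.TrimSpace(string(out))
		fmt.Printf("domain %s state: %s\n", domain, state)

		// If the domain is still running after the graceful stop timed out,
		// "virsh destroy" performs an immediate, ungraceful power-off.
		if state == "running" {
			if out, err := exec.Command("virsh", "destroy", domain).CombinedOutput(); err != nil {
				log.Fatalf("virsh destroy failed: %v (%s)", err, out)
			}
			fmt.Println("domain forcibly stopped")
		}
	}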
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr: exit status 3 (19.004340927s)

                                                
                                                
-- stdout --
	ha-821265
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-821265-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 11:23:08.484475   36564 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:23:08.484610   36564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:23:08.484615   36564 out.go:304] Setting ErrFile to fd 2...
	I0422 11:23:08.484619   36564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:23:08.484832   36564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:23:08.485020   36564 out.go:298] Setting JSON to false
	I0422 11:23:08.485048   36564 mustload.go:65] Loading cluster: ha-821265
	I0422 11:23:08.485122   36564 notify.go:220] Checking for updates...
	I0422 11:23:08.485515   36564 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:23:08.485532   36564 status.go:255] checking status of ha-821265 ...
	I0422 11:23:08.485964   36564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:23:08.486036   36564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:23:08.502048   36564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41667
	I0422 11:23:08.502463   36564 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:23:08.503052   36564 main.go:141] libmachine: Using API Version  1
	I0422 11:23:08.503085   36564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:23:08.503510   36564 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:23:08.503719   36564 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:23:08.505556   36564 status.go:330] ha-821265 host status = "Running" (err=<nil>)
	I0422 11:23:08.505582   36564 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:23:08.505864   36564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:23:08.505905   36564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:23:08.520025   36564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42607
	I0422 11:23:08.520482   36564 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:23:08.520954   36564 main.go:141] libmachine: Using API Version  1
	I0422 11:23:08.520978   36564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:23:08.521266   36564 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:23:08.521462   36564 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:23:08.524372   36564 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:23:08.524801   36564 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:23:08.524833   36564 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:23:08.524974   36564 host.go:66] Checking if "ha-821265" exists ...
	I0422 11:23:08.525326   36564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:23:08.525365   36564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:23:08.540112   36564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35561
	I0422 11:23:08.540594   36564 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:23:08.541131   36564 main.go:141] libmachine: Using API Version  1
	I0422 11:23:08.541148   36564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:23:08.541513   36564 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:23:08.541683   36564 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:23:08.541878   36564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:23:08.541902   36564 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:23:08.544632   36564 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:23:08.545106   36564 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:23:08.545134   36564 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:23:08.545255   36564 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:23:08.545494   36564 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:23:08.545650   36564 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:23:08.545832   36564 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:23:08.633147   36564 ssh_runner.go:195] Run: systemctl --version
	I0422 11:23:08.642050   36564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:23:08.660194   36564 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:23:08.660225   36564 api_server.go:166] Checking apiserver status ...
	I0422 11:23:08.660271   36564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:23:08.683092   36564 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5073/cgroup
	W0422 11:23:08.696123   36564 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5073/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:23:08.696169   36564 ssh_runner.go:195] Run: ls
	I0422 11:23:08.703739   36564 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:23:08.711193   36564 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:23:08.711215   36564 status.go:422] ha-821265 apiserver status = Running (err=<nil>)
	I0422 11:23:08.711225   36564 status.go:257] ha-821265 status: &{Name:ha-821265 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:23:08.711239   36564 status.go:255] checking status of ha-821265-m02 ...
	I0422 11:23:08.711515   36564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:23:08.711549   36564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:23:08.726615   36564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39581
	I0422 11:23:08.727051   36564 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:23:08.727532   36564 main.go:141] libmachine: Using API Version  1
	I0422 11:23:08.727552   36564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:23:08.727863   36564 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:23:08.728070   36564 main.go:141] libmachine: (ha-821265-m02) Calling .GetState
	I0422 11:23:08.729665   36564 status.go:330] ha-821265-m02 host status = "Running" (err=<nil>)
	I0422 11:23:08.729682   36564 host.go:66] Checking if "ha-821265-m02" exists ...
	I0422 11:23:08.729981   36564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:23:08.730020   36564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:23:08.744430   36564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44275
	I0422 11:23:08.744912   36564 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:23:08.745498   36564 main.go:141] libmachine: Using API Version  1
	I0422 11:23:08.745517   36564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:23:08.745895   36564 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:23:08.746095   36564 main.go:141] libmachine: (ha-821265-m02) Calling .GetIP
	I0422 11:23:08.748962   36564 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:23:08.749453   36564 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:18:30 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:23:08.749478   36564 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:23:08.749648   36564 host.go:66] Checking if "ha-821265-m02" exists ...
	I0422 11:23:08.749935   36564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:23:08.749967   36564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:23:08.765533   36564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39879
	I0422 11:23:08.765971   36564 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:23:08.766491   36564 main.go:141] libmachine: Using API Version  1
	I0422 11:23:08.766514   36564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:23:08.766844   36564 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:23:08.766991   36564 main.go:141] libmachine: (ha-821265-m02) Calling .DriverName
	I0422 11:23:08.767189   36564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:23:08.767206   36564 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHHostname
	I0422 11:23:08.770123   36564 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:23:08.770672   36564 main.go:141] libmachine: (ha-821265-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:2d:41", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:18:30 +0000 UTC Type:0 Mac:52:54:00:3b:2d:41 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:ha-821265-m02 Clientid:01:52:54:00:3b:2d:41}
	I0422 11:23:08.770706   36564 main.go:141] libmachine: (ha-821265-m02) DBG | domain ha-821265-m02 has defined IP address 192.168.39.39 and MAC address 52:54:00:3b:2d:41 in network mk-ha-821265
	I0422 11:23:08.770878   36564 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHPort
	I0422 11:23:08.771062   36564 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHKeyPath
	I0422 11:23:08.771229   36564 main.go:141] libmachine: (ha-821265-m02) Calling .GetSSHUsername
	I0422 11:23:08.771461   36564 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m02/id_rsa Username:docker}
	I0422 11:23:08.867863   36564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:23:08.891406   36564 kubeconfig.go:125] found "ha-821265" server: "https://192.168.39.254:8443"
	I0422 11:23:08.891430   36564 api_server.go:166] Checking apiserver status ...
	I0422 11:23:08.891461   36564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:23:08.911298   36564 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1425/cgroup
	W0422 11:23:08.926259   36564 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1425/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:23:08.926329   36564 ssh_runner.go:195] Run: ls
	I0422 11:23:08.932944   36564 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 11:23:08.937533   36564 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 11:23:08.937556   36564 status.go:422] ha-821265-m02 apiserver status = Running (err=<nil>)
	I0422 11:23:08.937564   36564 status.go:257] ha-821265-m02 status: &{Name:ha-821265-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:23:08.937578   36564 status.go:255] checking status of ha-821265-m04 ...
	I0422 11:23:08.937908   36564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:23:08.937944   36564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:23:08.952912   36564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0422 11:23:08.953395   36564 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:23:08.953846   36564 main.go:141] libmachine: Using API Version  1
	I0422 11:23:08.953865   36564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:23:08.954213   36564 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:23:08.954392   36564 main.go:141] libmachine: (ha-821265-m04) Calling .GetState
	I0422 11:23:08.955889   36564 status.go:330] ha-821265-m04 host status = "Running" (err=<nil>)
	I0422 11:23:08.955904   36564 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:23:08.956292   36564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:23:08.956347   36564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:23:08.972684   36564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33237
	I0422 11:23:08.973087   36564 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:23:08.973598   36564 main.go:141] libmachine: Using API Version  1
	I0422 11:23:08.973632   36564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:23:08.973931   36564 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:23:08.974138   36564 main.go:141] libmachine: (ha-821265-m04) Calling .GetIP
	I0422 11:23:08.976971   36564 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:23:08.977343   36564 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:20:33 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:23:08.977373   36564 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:23:08.977531   36564 host.go:66] Checking if "ha-821265-m04" exists ...
	I0422 11:23:08.977851   36564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:23:08.977890   36564 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:23:08.993039   36564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34383
	I0422 11:23:08.993429   36564 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:23:08.993929   36564 main.go:141] libmachine: Using API Version  1
	I0422 11:23:08.993962   36564 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:23:08.994268   36564 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:23:08.994494   36564 main.go:141] libmachine: (ha-821265-m04) Calling .DriverName
	I0422 11:23:08.994688   36564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:23:08.994709   36564 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHHostname
	I0422 11:23:08.997422   36564 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:23:08.997875   36564 main.go:141] libmachine: (ha-821265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:86:f0", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:20:33 +0000 UTC Type:0 Mac:52:54:00:02:86:f0 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-821265-m04 Clientid:01:52:54:00:02:86:f0}
	I0422 11:23:08.997903   36564 main.go:141] libmachine: (ha-821265-m04) DBG | domain ha-821265-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:02:86:f0 in network mk-ha-821265
	I0422 11:23:08.997998   36564 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHPort
	I0422 11:23:08.998166   36564 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHKeyPath
	I0422 11:23:08.998330   36564 main.go:141] libmachine: (ha-821265-m04) Calling .GetSSHUsername
	I0422 11:23:08.998433   36564 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265-m04/id_rsa Username:docker}
	W0422 11:23:27.432991   36564 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.252:22: connect: no route to host
	W0422 11:23:27.433089   36564 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.252:22: connect: no route to host
	E0422 11:23:27.433105   36564 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.252:22: connect: no route to host
	I0422 11:23:27.433113   36564 status.go:257] ha-821265-m04 status: &{Name:ha-821265-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0422 11:23:27.433129   36564 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.252:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr" : exit status 3
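Note: the status command exits with status 3 only because the worker node's SSH endpoint could not be reached (dial tcp 192.168.39.252:22: connect: no route to host), so ha-821265-m04 is reported as Host:Error / Kubelet:Nonexistent instead of being inspected. Below is a minimal standalone sketch, assuming the IP and port taken from the log above and an arbitrary 5-second timeout, that reproduces the same reachability probe outside minikube; it is illustration only, not the code path status uses.

	// reachability_sketch.go - hypothetical check mirroring the dial failure in the log above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.39.252:22" // node IP and SSH port copied from the status log

		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// A "no route to host" or timeout here matches the status error and explains
			// why the node shows up as Host:Error / Kubelet:Nonexistent.
			fmt.Printf("ssh port unreachable: %v\n", err)
			return
		}
		defer conn.Close()
		fmt.Println("ssh port reachable; the failure would then lie higher up the stack")
	}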
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-821265 -n ha-821265
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-821265 logs -n 25: (1.933681289s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-821265 ssh -n ha-821265-m02 sudo cat                                          | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m03_ha-821265-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m03:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04:/home/docker/cp-test_ha-821265-m03_ha-821265-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265-m04 sudo cat                                          | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m03_ha-821265-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-821265 cp testdata/cp-test.txt                                                | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1102049705/001/cp-test_ha-821265-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265:/home/docker/cp-test_ha-821265-m04_ha-821265.txt                       |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265 sudo cat                                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m04_ha-821265.txt                                 |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m02:/home/docker/cp-test_ha-821265-m04_ha-821265-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265-m02 sudo cat                                          | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m04_ha-821265-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m03:/home/docker/cp-test_ha-821265-m04_ha-821265-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n                                                                 | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | ha-821265-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-821265 ssh -n ha-821265-m03 sudo cat                                          | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC | 22 Apr 24 11:11 UTC |
	|         | /home/docker/cp-test_ha-821265-m04_ha-821265-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-821265 node stop m02 -v=7                                                     | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-821265 node start m02 -v=7                                                    | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-821265 -v=7                                                           | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-821265 -v=7                                                                | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-821265 --wait=true -v=7                                                    | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:16 UTC | 22 Apr 24 11:20 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-821265                                                                | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:20 UTC |                     |
	| node    | ha-821265 node delete m03 -v=7                                                   | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:20 UTC | 22 Apr 24 11:21 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-821265 stop -v=7                                                              | ha-821265 | jenkins | v1.33.0 | 22 Apr 24 11:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 11:16:42
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 11:16:42.486997   33971 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:16:42.487233   33971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:16:42.487241   33971 out.go:304] Setting ErrFile to fd 2...
	I0422 11:16:42.487245   33971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:16:42.487432   33971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:16:42.488117   33971 out.go:298] Setting JSON to false
	I0422 11:16:42.489889   33971 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3546,"bootTime":1713781057,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 11:16:42.489955   33971 start.go:139] virtualization: kvm guest
	I0422 11:16:42.492400   33971 out.go:177] * [ha-821265] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 11:16:42.494287   33971 notify.go:220] Checking for updates...
	I0422 11:16:42.494297   33971 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 11:16:42.495769   33971 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 11:16:42.497132   33971 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 11:16:42.498694   33971 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:16:42.500058   33971 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 11:16:42.501395   33971 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 11:16:42.503034   33971 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:16:42.503131   33971 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 11:16:42.503542   33971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:16:42.503592   33971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:16:42.518858   33971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36535
	I0422 11:16:42.519381   33971 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:16:42.519938   33971 main.go:141] libmachine: Using API Version  1
	I0422 11:16:42.519960   33971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:16:42.520314   33971 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:16:42.520543   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:16:42.556170   33971 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 11:16:42.557837   33971 start.go:297] selected driver: kvm2
	I0422 11:16:42.557852   33971 start.go:901] validating driver "kvm2" against &{Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-82
1265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.252 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth
:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:16:42.558004   33971 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 11:16:42.558318   33971 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 11:16:42.558395   33971 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18711-7633/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 11:16:42.572492   33971 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 11:16:42.573312   33971 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 11:16:42.573397   33971 cni.go:84] Creating CNI manager for ""
	I0422 11:16:42.573414   33971 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0422 11:16:42.573486   33971 start.go:340] cluster config:
	{Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.252 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:16:42.573638   33971 iso.go:125] acquiring lock: {Name:mkb6ac9fd17ffabc92a94047094130aad6203a95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 11:16:42.576329   33971 out.go:177] * Starting "ha-821265" primary control-plane node in "ha-821265" cluster
	I0422 11:16:42.577901   33971 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 11:16:42.577943   33971 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 11:16:42.577953   33971 cache.go:56] Caching tarball of preloaded images
	I0422 11:16:42.578051   33971 preload.go:173] Found /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 11:16:42.578064   33971 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 11:16:42.578195   33971 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/config.json ...
	I0422 11:16:42.578419   33971 start.go:360] acquireMachinesLock for ha-821265: {Name:mk5cb9b294e703b264c1f97ac968ffd01e93b576 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 11:16:42.578481   33971 start.go:364] duration metric: took 40.744µs to acquireMachinesLock for "ha-821265"
	I0422 11:16:42.578499   33971 start.go:96] Skipping create...Using existing machine configuration
	I0422 11:16:42.578507   33971 fix.go:54] fixHost starting: 
	I0422 11:16:42.578781   33971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:16:42.578818   33971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:16:42.592489   33971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34723
	I0422 11:16:42.592859   33971 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:16:42.593293   33971 main.go:141] libmachine: Using API Version  1
	I0422 11:16:42.593320   33971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:16:42.593624   33971 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:16:42.593827   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:16:42.593995   33971 main.go:141] libmachine: (ha-821265) Calling .GetState
	I0422 11:16:42.595453   33971 fix.go:112] recreateIfNeeded on ha-821265: state=Running err=<nil>
	W0422 11:16:42.595480   33971 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 11:16:42.597619   33971 out.go:177] * Updating the running kvm2 "ha-821265" VM ...
	I0422 11:16:42.598928   33971 machine.go:94] provisionDockerMachine start ...
	I0422 11:16:42.598950   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:16:42.599144   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:16:42.601351   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.601721   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:16:42.601740   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.601874   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:16:42.602032   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:42.602160   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:42.602274   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:16:42.602447   33971 main.go:141] libmachine: Using SSH client type: native
	I0422 11:16:42.602660   33971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:16:42.602671   33971 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 11:16:42.718874   33971 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-821265
	
	I0422 11:16:42.718903   33971 main.go:141] libmachine: (ha-821265) Calling .GetMachineName
	I0422 11:16:42.719177   33971 buildroot.go:166] provisioning hostname "ha-821265"
	I0422 11:16:42.719205   33971 main.go:141] libmachine: (ha-821265) Calling .GetMachineName
	I0422 11:16:42.719410   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:16:42.722145   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.722526   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:16:42.722563   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.722684   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:16:42.722852   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:42.723032   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:42.723192   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:16:42.723364   33971 main.go:141] libmachine: Using SSH client type: native
	I0422 11:16:42.723553   33971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:16:42.723568   33971 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-821265 && echo "ha-821265" | sudo tee /etc/hostname
	I0422 11:16:42.851081   33971 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-821265
	
	I0422 11:16:42.851110   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:16:42.853907   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.854315   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:16:42.854353   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.854559   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:16:42.854739   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:42.854901   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:42.855050   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:16:42.855196   33971 main.go:141] libmachine: Using SSH client type: native
	I0422 11:16:42.855431   33971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:16:42.855455   33971 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-821265' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-821265/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-821265' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 11:16:42.962501   33971 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 11:16:42.962527   33971 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18711-7633/.minikube CaCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18711-7633/.minikube}
	I0422 11:16:42.962560   33971 buildroot.go:174] setting up certificates
	I0422 11:16:42.962571   33971 provision.go:84] configureAuth start
	I0422 11:16:42.962581   33971 main.go:141] libmachine: (ha-821265) Calling .GetMachineName
	I0422 11:16:42.962854   33971 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:16:42.965480   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.965864   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:16:42.965886   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.966034   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:16:42.968147   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.968482   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:16:42.968507   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:42.968625   33971 provision.go:143] copyHostCerts
	I0422 11:16:42.968657   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:16:42.968685   33971 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem, removing ...
	I0422 11:16:42.968694   33971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:16:42.968813   33971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem (1123 bytes)
	I0422 11:16:42.968923   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:16:42.968950   33971 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem, removing ...
	I0422 11:16:42.968965   33971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:16:42.969002   33971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem (1679 bytes)
	I0422 11:16:42.969091   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:16:42.969110   33971 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem, removing ...
	I0422 11:16:42.969117   33971 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:16:42.969139   33971 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem (1078 bytes)
	I0422 11:16:42.969181   33971 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem org=jenkins.ha-821265 san=[127.0.0.1 192.168.39.150 ha-821265 localhost minikube]
	I0422 11:16:43.101270   33971 provision.go:177] copyRemoteCerts
	I0422 11:16:43.101327   33971 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 11:16:43.101348   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:16:43.103986   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:43.104365   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:16:43.104400   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:43.104577   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:16:43.104854   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:43.105060   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:16:43.105222   33971 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:16:43.190030   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 11:16:43.190115   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 11:16:43.219245   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 11:16:43.219317   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0422 11:16:43.247742   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 11:16:43.247805   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 11:16:43.278390   33971 provision.go:87] duration metric: took 315.803849ms to configureAuth
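
configureAuth above regenerates the Docker-machine server certificate (org=jenkins.ha-821265, SANs 127.0.0.1, 192.168.39.150, ha-821265, localhost, minikube) and copies it to /etc/docker/server.pem on the guest. A minimal Go sketch, not minikube's own code, for inspecting which SANs such a certificate actually carries (path taken from the log above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path as provisioned above; on the host the same cert lives under
	// .minikube/machines/server.pem.
	raw, err := os.ReadFile("/etc/docker/server.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// These should echo the san=[...] list logged by provision.go above.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs:", cert.IPAddresses)
	fmt.Println("Organization:", cert.Subject.Organization)
}
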
	I0422 11:16:43.278415   33971 buildroot.go:189] setting minikube options for container-runtime
	I0422 11:16:43.278619   33971 config.go:182] Loaded profile config "ha-821265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:16:43.278687   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:16:43.281124   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:43.281536   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:16:43.281564   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:16:43.281697   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:16:43.281893   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:43.282066   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:16:43.282203   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:16:43.282378   33971 main.go:141] libmachine: Using SSH client type: native
	I0422 11:16:43.282577   33971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:16:43.282598   33971 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 11:18:14.302771   33971 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 11:18:14.302800   33971 machine.go:97] duration metric: took 1m31.703853743s to provisionDockerMachine
	I0422 11:18:14.302814   33971 start.go:293] postStartSetup for "ha-821265" (driver="kvm2")
	I0422 11:18:14.302827   33971 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 11:18:14.302845   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:18:14.303187   33971 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 11:18:14.303223   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:18:14.306285   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.306670   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:18:14.306692   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.306823   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:18:14.306979   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:18:14.307136   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:18:14.307283   33971 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:18:14.388598   33971 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 11:18:14.393911   33971 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 11:18:14.393939   33971 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/addons for local assets ...
	I0422 11:18:14.394020   33971 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/files for local assets ...
	I0422 11:18:14.394117   33971 filesync.go:149] local asset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> 149452.pem in /etc/ssl/certs
	I0422 11:18:14.394130   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /etc/ssl/certs/149452.pem
	I0422 11:18:14.394277   33971 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 11:18:14.405520   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:18:14.433983   33971 start.go:296] duration metric: took 131.157052ms for postStartSetup
	I0422 11:18:14.434029   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:18:14.434327   33971 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0422 11:18:14.434359   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:18:14.437083   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.437505   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:18:14.437528   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.437668   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:18:14.437865   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:18:14.438028   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:18:14.438227   33971 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	W0422 11:18:14.519785   33971 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0422 11:18:14.519805   33971 fix.go:56] duration metric: took 1m31.941298972s for fixHost
	I0422 11:18:14.519829   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:18:14.522443   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.522780   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:18:14.522807   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.522914   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:18:14.523197   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:18:14.523397   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:18:14.523599   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:18:14.523828   33971 main.go:141] libmachine: Using SSH client type: native
	I0422 11:18:14.524030   33971 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0422 11:18:14.524044   33971 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 11:18:14.626381   33971 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713784694.594145142
	
	I0422 11:18:14.626407   33971 fix.go:216] guest clock: 1713784694.594145142
	I0422 11:18:14.626418   33971 fix.go:229] Guest: 2024-04-22 11:18:14.594145142 +0000 UTC Remote: 2024-04-22 11:18:14.519813701 +0000 UTC m=+92.082745768 (delta=74.331441ms)
	I0422 11:18:14.626443   33971 fix.go:200] guest clock delta is within tolerance: 74.331441ms
	I0422 11:18:14.626448   33971 start.go:83] releasing machines lock for "ha-821265", held for 1m32.047956729s
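
fix.go above reads the guest clock over SSH with `date +%s.%N`, compares it against the host's reference time, and accepts the ~74ms delta as within tolerance. A rough Go sketch of that comparison using the two timestamps from this log; the 2s tolerance is a placeholder, since the actual threshold is not printed here:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch converts `date +%s.%N` output such as "1713784694.594145142"
// into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	// Guest value from the SSH output above; host reference from the same log line.
	guest, err := parseEpoch("1713784694.594145142")
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, 4, 22, 11, 18, 14, 519813701, time.UTC)
	delta := guest.Sub(host)          // ~74.33ms in this run
	const tolerance = 2 * time.Second // placeholder; the real threshold isn't logged here
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta > -tolerance && delta < tolerance)
}
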
	I0422 11:18:14.626469   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:18:14.626768   33971 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:18:14.629489   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.629939   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:18:14.629963   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.630136   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:18:14.630646   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:18:14.630813   33971 main.go:141] libmachine: (ha-821265) Calling .DriverName
	I0422 11:18:14.630894   33971 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 11:18:14.630946   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:18:14.631020   33971 ssh_runner.go:195] Run: cat /version.json
	I0422 11:18:14.631050   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHHostname
	I0422 11:18:14.633208   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.633551   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:18:14.633587   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.633604   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.633732   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:18:14.633879   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:18:14.634024   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:18:14.634030   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:18:14.634046   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:14.634165   33971 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:18:14.634200   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHPort
	I0422 11:18:14.634328   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHKeyPath
	I0422 11:18:14.634465   33971 main.go:141] libmachine: (ha-821265) Calling .GetSSHUsername
	I0422 11:18:14.634628   33971 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/ha-821265/id_rsa Username:docker}
	I0422 11:18:14.710759   33971 ssh_runner.go:195] Run: systemctl --version
	I0422 11:18:14.742872   33971 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 11:18:14.916988   33971 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 11:18:14.924012   33971 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 11:18:14.924080   33971 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 11:18:14.935584   33971 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0422 11:18:14.935610   33971 start.go:494] detecting cgroup driver to use...
	I0422 11:18:14.935680   33971 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 11:18:14.954977   33971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 11:18:14.970144   33971 docker.go:217] disabling cri-docker service (if available) ...
	I0422 11:18:14.970210   33971 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 11:18:14.985826   33971 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 11:18:15.001395   33971 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 11:18:15.160016   33971 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 11:18:15.315939   33971 docker.go:233] disabling docker service ...
	I0422 11:18:15.316014   33971 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 11:18:15.334316   33971 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 11:18:15.349873   33971 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 11:18:15.508685   33971 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 11:18:15.675710   33971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 11:18:15.692132   33971 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 11:18:15.714162   33971 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 11:18:15.714238   33971 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:18:15.728130   33971 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 11:18:15.728185   33971 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:18:15.740896   33971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:18:15.753390   33971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:18:15.765705   33971 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 11:18:15.778230   33971 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:18:15.790526   33971 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:18:15.802999   33971 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:18:15.816471   33971 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 11:18:15.827654   33971 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 11:18:15.838665   33971 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:18:15.984506   33971 ssh_runner.go:195] Run: sudo systemctl restart crio
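
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.9 as the pause image and cgroupfs as the cgroup manager before the daemon is restarted. A toy Go equivalent of those two substitutions; the starting file content below is invented purely for illustration, and minikube itself applies the edits with sed over SSH rather than rewriting the file locally:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for /etc/crio/crio.conf.d/02-crio.conf (contents are made up).
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"

	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
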
	I0422 11:18:16.534017   33971 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 11:18:16.534105   33971 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 11:18:16.539635   33971 start.go:562] Will wait 60s for crictl version
	I0422 11:18:16.539698   33971 ssh_runner.go:195] Run: which crictl
	I0422 11:18:16.544033   33971 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 11:18:16.590935   33971 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 11:18:16.591011   33971 ssh_runner.go:195] Run: crio --version
	I0422 11:18:16.625412   33971 ssh_runner.go:195] Run: crio --version
	I0422 11:18:16.660489   33971 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 11:18:16.661937   33971 main.go:141] libmachine: (ha-821265) Calling .GetIP
	I0422 11:18:16.664423   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:16.664733   33971 main.go:141] libmachine: (ha-821265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:f6:ad", ip: ""} in network mk-ha-821265: {Iface:virbr1 ExpiryTime:2024-04-22 12:06:52 +0000 UTC Type:0 Mac:52:54:00:17:f6:ad Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-821265 Clientid:01:52:54:00:17:f6:ad}
	I0422 11:18:16.664759   33971 main.go:141] libmachine: (ha-821265) DBG | domain ha-821265 has defined IP address 192.168.39.150 and MAC address 52:54:00:17:f6:ad in network mk-ha-821265
	I0422 11:18:16.664938   33971 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 11:18:16.670314   33971 kubeadm.go:877] updating cluster {Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.252 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 11:18:16.670456   33971 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 11:18:16.670506   33971 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 11:18:16.717049   33971 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 11:18:16.717074   33971 crio.go:433] Images already preloaded, skipping extraction
	I0422 11:18:16.717119   33971 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 11:18:16.755351   33971 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 11:18:16.755370   33971 cache_images.go:84] Images are preloaded, skipping loading
	I0422 11:18:16.755378   33971 kubeadm.go:928] updating node { 192.168.39.150 8443 v1.30.0 crio true true} ...
	I0422 11:18:16.755497   33971 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-821265 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 11:18:16.755584   33971 ssh_runner.go:195] Run: crio config
	I0422 11:18:16.809629   33971 cni.go:84] Creating CNI manager for ""
	I0422 11:18:16.809649   33971 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0422 11:18:16.809661   33971 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 11:18:16.809680   33971 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-821265 NodeName:ha-821265 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 11:18:16.809809   33971 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-821265"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 11:18:16.809831   33971 kube-vip.go:111] generating kube-vip config ...
	I0422 11:18:16.809879   33971 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0422 11:18:16.823240   33971 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0422 11:18:16.823376   33971 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
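
kube-vip.go above only enables control-plane load-balancing (lb_enable in the manifest) after confirming that the IPVS kernel modules can be loaded, which is what the modprobe run before the config is for. A minimal Go sketch of that probe, not the actual implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe as in the log: if the IPVS modules load, lb_enable can be set.
	err := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
	fmt.Println("enable kube-vip load-balancing:", err == nil)
}
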
	I0422 11:18:16.823439   33971 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 11:18:16.834405   33971 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 11:18:16.834466   33971 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0422 11:18:16.846033   33971 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0422 11:18:16.867185   33971 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 11:18:16.886124   33971 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0422 11:18:16.904301   33971 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0422 11:18:16.922003   33971 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0422 11:18:16.927344   33971 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:18:17.079485   33971 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 11:18:17.095020   33971 certs.go:68] Setting up /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265 for IP: 192.168.39.150
	I0422 11:18:17.095042   33971 certs.go:194] generating shared ca certs ...
	I0422 11:18:17.095056   33971 certs.go:226] acquiring lock for ca certs: {Name:mk0b77082b88c771d0b00be5267ca31dfee6f85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:18:17.095195   33971 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key
	I0422 11:18:17.095232   33971 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key
	I0422 11:18:17.095248   33971 certs.go:256] generating profile certs ...
	I0422 11:18:17.095322   33971 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/client.key
	I0422 11:18:17.095347   33971 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.c2a57ae6
	I0422 11:18:17.095362   33971 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.c2a57ae6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150 192.168.39.39 192.168.39.95 192.168.39.254]
	I0422 11:18:17.297368   33971 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.c2a57ae6 ...
	I0422 11:18:17.297397   33971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.c2a57ae6: {Name:mk329652d53ceaf163cc9215e6e3102215ab0232 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:18:17.297562   33971 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.c2a57ae6 ...
	I0422 11:18:17.297573   33971 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.c2a57ae6: {Name:mkd9033c2a3f5e2f4d691d0dc3d49c9b8162a362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:18:17.297643   33971 certs.go:381] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt.c2a57ae6 -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt
	I0422 11:18:17.297775   33971 certs.go:385] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key.c2a57ae6 -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key
	I0422 11:18:17.297911   33971 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key
	I0422 11:18:17.297930   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 11:18:17.297942   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 11:18:17.297955   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 11:18:17.297968   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 11:18:17.297980   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 11:18:17.297991   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 11:18:17.298003   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 11:18:17.298015   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 11:18:17.298061   33971 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem (1338 bytes)
	W0422 11:18:17.298092   33971 certs.go:480] ignoring /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945_empty.pem, impossibly tiny 0 bytes
	I0422 11:18:17.298101   33971 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem (1679 bytes)
	I0422 11:18:17.298122   33971 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem (1078 bytes)
	I0422 11:18:17.298142   33971 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem (1123 bytes)
	I0422 11:18:17.298163   33971 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem (1679 bytes)
	I0422 11:18:17.298200   33971 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:18:17.298235   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /usr/share/ca-certificates/149452.pem
	I0422 11:18:17.298248   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:18:17.298267   33971 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem -> /usr/share/ca-certificates/14945.pem
	I0422 11:18:17.298840   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 11:18:17.399398   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 11:18:17.468177   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 11:18:17.514170   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 11:18:17.559462   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0422 11:18:17.587362   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 11:18:17.616099   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 11:18:17.657815   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/ha-821265/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 11:18:17.684678   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /usr/share/ca-certificates/149452.pem (1708 bytes)
	I0422 11:18:17.710672   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 11:18:17.739306   33971 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem --> /usr/share/ca-certificates/14945.pem (1338 bytes)
	I0422 11:18:17.766073   33971 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 11:18:17.784112   33971 ssh_runner.go:195] Run: openssl version
	I0422 11:18:17.790510   33971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149452.pem && ln -fs /usr/share/ca-certificates/149452.pem /etc/ssl/certs/149452.pem"
	I0422 11:18:17.802712   33971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149452.pem
	I0422 11:18:17.807704   33971 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 10:51 /usr/share/ca-certificates/149452.pem
	I0422 11:18:17.807762   33971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149452.pem
	I0422 11:18:17.814073   33971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149452.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 11:18:17.825154   33971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 11:18:17.836933   33971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:18:17.841859   33971 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:18:17.841938   33971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:18:17.848643   33971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 11:18:17.859376   33971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14945.pem && ln -fs /usr/share/ca-certificates/14945.pem /etc/ssl/certs/14945.pem"
	I0422 11:18:17.870936   33971 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14945.pem
	I0422 11:18:17.875705   33971 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 10:51 /usr/share/ca-certificates/14945.pem
	I0422 11:18:17.875761   33971 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14945.pem
	I0422 11:18:17.881961   33971 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14945.pem /etc/ssl/certs/51391683.0"
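
The openssl/ln pairs above install each CA into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem), which is how the system trust store looks certificates up. A hedged Go sketch of the same idea, shelling out to openssl for the hash:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Compute the subject hash the same way the log does, via openssl.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 in this run
	fmt.Println("would link /etc/ssl/certs/"+hash+".0", "-> /etc/ssl/certs/minikubeCA.pem")
}
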
	I0422 11:18:17.892055   33971 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 11:18:17.896890   33971 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 11:18:17.903057   33971 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 11:18:17.909049   33971 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 11:18:17.915315   33971 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 11:18:17.921555   33971 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 11:18:17.927605   33971 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
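
The `openssl x509 ... -checkend 86400` runs above verify that each control-plane certificate remains valid for at least 24 hours before the cluster is restarted. An equivalent check written in Go, shown as a sketch (the path is one of those probed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin answers the same question as `openssl x509 -checkend`:
// does the certificate at path expire within d?
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Run on the node (or over SSH), as the checks above are.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
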
	I0422 11:18:17.933764   33971 kubeadm.go:391] StartCluster: {Name:ha-821265 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-821265 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.39 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.252 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:18:17.933890   33971 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 11:18:17.933926   33971 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 11:18:17.973785   33971 cri.go:89] found id: "16ef56225fa557ba27676ea985b488c3ee74c6c8596475b369b680ef8452686c"
	I0422 11:18:17.973806   33971 cri.go:89] found id: "c35de5462c21abea81ffc8d36f5be3ac560f53ea35d05d46cef598052731c89e"
	I0422 11:18:17.973810   33971 cri.go:89] found id: "38fd57ab261cd8c0d18f36cf8e96372b4bc8bd7a5e3a2fecb4c1e18f64b434a9"
	I0422 11:18:17.973813   33971 cri.go:89] found id: "1998bef851f9a842f606af6c4dfadb36bac1aecddb6b3799e3f13edb7f1acf58"
	I0422 11:18:17.973816   33971 cri.go:89] found id: "03c93b733e9d824b355dd41ee07faefa7e1f8b2a4f452bb053f1a9edd8d4106f"
	I0422 11:18:17.973819   33971 cri.go:89] found id: "28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391"
	I0422 11:18:17.973821   33971 cri.go:89] found id: "609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139"
	I0422 11:18:17.973824   33971 cri.go:89] found id: "1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269"
	I0422 11:18:17.973826   33971 cri.go:89] found id: "a26ec191f8bcbef49468ef3d9b903de2da840c90478ee97540859b8f37f581f1"
	I0422 11:18:17.973834   33971 cri.go:89] found id: "2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5"
	I0422 11:18:17.973838   33971 cri.go:89] found id: "652741477fa90fca19fc111b1191a6acd0e2edcee141e389e5fd84f6018ec38e"
	I0422 11:18:17.973840   33971 cri.go:89] found id: "7cbf52d94248bdbe7ca0e2622c441a457f4747f2d8e8969d25f7b6e629e1b566"
	I0422 11:18:17.973843   33971 cri.go:89] found id: "ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803"
	I0422 11:18:17.973845   33971 cri.go:89] found id: ""
	I0422 11:18:17.973882   33971 ssh_runner.go:195] Run: sudo runc list -f json
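
StartCluster above lists the existing kube-system containers with crictl and records each ID ("found id: ..."), then inspects them with `runc list`. A simplified Go version of the crictl listing step, run directly with sudo here rather than through `sudo -s eval` as in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// One container ID per line, for any state, filtered to kube-system pods.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}
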
	
	
	==> CRI-O <==
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.096708125Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713785008096681894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee3af80a-ed79-4722-9523-51264bfcb006 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.097629822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b727848d-28da-4e58-85e0-eb96a8d551ec name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.097875079Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b727848d-28da-4e58-85e0-eb96a8d551ec name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.099627976Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdbfadb4ed8d19096021578583930566b38bffb62ac75a0fa4bfa1854bc51c07,PodSandboxId:5718ac2f010731d932225775ad8b53843e8a598c210cccfd78b15fcdc08bebc4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713784773618457331,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7aebece9906bc4053b906e1ecb267481a893fb7ff00bed5de74ed1cfa54000,PodSandboxId:d7028b8f29863bad3892327487b14f468d4c9cb4a5469a9b4b34ae91148f52c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713784762600202240,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06935d6ef805f3d3cf6a05bcb64dc081d72aae88db019d142b68750d3cf1c867,PodSandboxId:c83e50830d3f4db878e319b1cce7bf9160c76a605bbddbb15186b52a363c346f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713784741599078298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9805a7cceb20ebe5dd98c40f4989b29929823f101fc5fa3e52ce922be823cf,PodSandboxId:3bce0f832a4b7c43c1a8d39bf39eebbaf7ef2c41958b8711e7cf2f48c892aa3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713784734596266351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef4332ea1f047dc3cfb415e4e3f6e85cf33465b2ef9f18f7951278d2479ca93,PodSandboxId:be2b7bbc0a977b699d86984cd22eb583aada9b085c8a1907e359c7b60a8b31c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713784732974770708,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84fb7087c9854ea4137c3862fe9dc600a9d3f90b3fbc9522a51e51e681a08e1,PodSandboxId:0fdc24e0bdf40a907df680923387fa4412d3fe58200f36e9c854f25f4915fb23,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713784715707288035,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd9482ff289c9d12747129b48272b7a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:d3cbf7c282792930e1df477971a4bd28b78cb49c295f6e0ac2c8a454824de5d2,PodSandboxId:d7028b8f29863bad3892327487b14f468d4c9cb4a5469a9b4b34ae91148f52c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713784699818708388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:45ee3a04fea005538c47381509fdf3d9e53cfe0bb8e8e14149e912ea8a67cfd8,PodSandboxId:e92faf278f88298e80a1a96b654d4a23ef26ee537f98d7b37fa7e8ecaaaf94c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713784699723405687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba046555
40d31d9ebf03299c794d1c3c3623ea9026908acaac007ac12a740b4,PodSandboxId:2800fa5fa268e57b67df341230522c1d69a07af4049cd86a97df3ceb3abff22c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784700024922806,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:086db7b19ea3bed15f2bf46fc53e6befb389e2aa6d163eb0290b45841b20a974,PodSandboxId:c1f81420600b023ee32a3073be284b93a2cbc2919d2092900ade645f006f514f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713784699657509914,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb1b67b39ae4f69aabbcde5efda711e2ad29a9f6926b4f1e78b54d9fcd92ed97,PodSandboxId:3bce0f832a4b7c43c1a8d39bf39eebbaf7ef2c41958b8711e7cf2f48c892aa3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713784699622280749,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27ec30a0ad7913b7b1f7c2670ac92c8e3c79a52b67ccae30cf41067898e375c,PodSandboxId:e1d9c3b2c209a79ca019d0c6b4e1e1d23a1390aed1ee7374d58e639804d6cc5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713784699637480543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594b38d4c919f0bd4386634ffa22d99b282bd1d1e2d832d8b64e67b021e866d4,PodSandboxId:c83e50830d3f4db878e319b1cce7bf9160c76a605bbddbb15186b52a363c346f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713784699510289451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b77e388cf4a08ba51bd96cd087b8b7ef6a23d957a16f1deb5ca55943ffe9f4,PodSandboxId:5718ac2f010731d932225775ad8b53843e8a598c210cccfd78b15fcdc08bebc4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713784699059530582,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bec52a480e1ba363d05d69d01be4a0cee8746d920f0865e277f9c21bc87cbe3,PodSandboxId:97870b1b56dc1a85e2557bc8e33b02398db33a96e36cc229f4632c06816d7196,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784698992367850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e45e23c690bb79c7fd65070b3188b60b1c0041e0955b10386851453d93e8c2,PodSandboxId:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713784211175334269,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernete
s.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391,PodSandboxId:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713784060436950826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139,PodSandboxId:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713784060349985507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269,PodSandboxId:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713784057949970963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5,PodSandboxId:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713784035610721822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803,PodSandboxId:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1713784035389376110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b727848d-28da-4e58-85e0-eb96a8d551ec name=/runtime.v1.RuntimeService/ListContainers
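The ListContainers dump above is one kubelet poll of CRI-O's container inventory; each &Container{...} entry carries the container name, state and restart count in its CRI labels and annotations. Below is a minimal sketch of a summariser for such a pasted blob, assuming the debug text has been saved to a file first; the helper is hypothetical and not part of the minikube test suite.

// rough_list_containers_summary.go -- hypothetical helper, not part of the
// test suite: condenses a pasted ListContainers debug blob like the one above
// into "name  state  restarts" rows.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	raw, err := os.ReadFile(os.Args[1]) // file containing the pasted log text
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The log is hard-wrapped mid-token, so drop line breaks to rejoin split
	// tokens before matching (a heuristic, good enough for a quick summary).
	text := strings.NewReplacer("\n", "", "\t", "").Replace(string(raw))

	// Within each &Container{...} entry, State precedes the CRI labels and
	// annotations, so a lazy match keeps the fields of one entry together.
	entry := regexp.MustCompile(`State:(CONTAINER_[A-Z]+),.*?io\.kubernetes\.container\.name: ([^,]+),.*?io\.kubernetes\.container\.restartCount: ([0-9]+)`)
	for _, m := range entry.FindAllStringSubmatch(text, -1) {
		fmt.Printf("%-24s %-18s restarts=%s\n", m[2], m[1], m[3])
	}
}

Run as: go run rough_list_containers_summary.go crio.log -- it prints one row per container, e.g. kube-apiserver CONTAINER_RUNNING restarts=3, matching the entries above.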
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.167267703Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dffe2f2c-9091-4599-a1bf-0e54ebc4bbe1 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.167380340Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dffe2f2c-9091-4599-a1bf-0e54ebc4bbe1 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.168780008Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29ccd3e5-6ce7-472c-8941-3f1de21807bc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.169368564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713785008169151439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29ccd3e5-6ce7-472c-8941-3f1de21807bc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.170147268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c61ecc35-4805-407c-b245-72750ee722e9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.170203344Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c61ecc35-4805-407c-b245-72750ee722e9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.170892119Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdbfadb4ed8d19096021578583930566b38bffb62ac75a0fa4bfa1854bc51c07,PodSandboxId:5718ac2f010731d932225775ad8b53843e8a598c210cccfd78b15fcdc08bebc4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713784773618457331,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7aebece9906bc4053b906e1ecb267481a893fb7ff00bed5de74ed1cfa54000,PodSandboxId:d7028b8f29863bad3892327487b14f468d4c9cb4a5469a9b4b34ae91148f52c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713784762600202240,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06935d6ef805f3d3cf6a05bcb64dc081d72aae88db019d142b68750d3cf1c867,PodSandboxId:c83e50830d3f4db878e319b1cce7bf9160c76a605bbddbb15186b52a363c346f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713784741599078298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9805a7cceb20ebe5dd98c40f4989b29929823f101fc5fa3e52ce922be823cf,PodSandboxId:3bce0f832a4b7c43c1a8d39bf39eebbaf7ef2c41958b8711e7cf2f48c892aa3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713784734596266351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef4332ea1f047dc3cfb415e4e3f6e85cf33465b2ef9f18f7951278d2479ca93,PodSandboxId:be2b7bbc0a977b699d86984cd22eb583aada9b085c8a1907e359c7b60a8b31c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713784732974770708,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84fb7087c9854ea4137c3862fe9dc600a9d3f90b3fbc9522a51e51e681a08e1,PodSandboxId:0fdc24e0bdf40a907df680923387fa4412d3fe58200f36e9c854f25f4915fb23,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713784715707288035,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd9482ff289c9d12747129b48272b7a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:d3cbf7c282792930e1df477971a4bd28b78cb49c295f6e0ac2c8a454824de5d2,PodSandboxId:d7028b8f29863bad3892327487b14f468d4c9cb4a5469a9b4b34ae91148f52c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713784699818708388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:45ee3a04fea005538c47381509fdf3d9e53cfe0bb8e8e14149e912ea8a67cfd8,PodSandboxId:e92faf278f88298e80a1a96b654d4a23ef26ee537f98d7b37fa7e8ecaaaf94c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713784699723405687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba046555
40d31d9ebf03299c794d1c3c3623ea9026908acaac007ac12a740b4,PodSandboxId:2800fa5fa268e57b67df341230522c1d69a07af4049cd86a97df3ceb3abff22c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784700024922806,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:086db7b19ea3bed15f2bf46fc53e6befb389e2aa6d163eb0290b45841b20a974,PodSandboxId:c1f81420600b023ee32a3073be284b93a2cbc2919d2092900ade645f006f514f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713784699657509914,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb1b67b39ae4f69aabbcde5efda711e2ad29a9f6926b4f1e78b54d9fcd92ed97,PodSandboxId:3bce0f832a4b7c43c1a8d39bf39eebbaf7ef2c41958b8711e7cf2f48c892aa3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713784699622280749,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27ec30a0ad7913b7b1f7c2670ac92c8e3c79a52b67ccae30cf41067898e375c,PodSandboxId:e1d9c3b2c209a79ca019d0c6b4e1e1d23a1390aed1ee7374d58e639804d6cc5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713784699637480543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594b38d4c919f0bd4386634ffa22d99b282bd1d1e2d832d8b64e67b021e866d4,PodSandboxId:c83e50830d3f4db878e319b1cce7bf9160c76a605bbddbb15186b52a363c346f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713784699510289451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b77e388cf4a08ba51bd96cd087b8b7ef6a23d957a16f1deb5ca55943ffe9f4,PodSandboxId:5718ac2f010731d932225775ad8b53843e8a598c210cccfd78b15fcdc08bebc4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713784699059530582,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bec52a480e1ba363d05d69d01be4a0cee8746d920f0865e277f9c21bc87cbe3,PodSandboxId:97870b1b56dc1a85e2557bc8e33b02398db33a96e36cc229f4632c06816d7196,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784698992367850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e45e23c690bb79c7fd65070b3188b60b1c0041e0955b10386851453d93e8c2,PodSandboxId:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713784211175334269,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernete
s.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391,PodSandboxId:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713784060436950826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139,PodSandboxId:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713784060349985507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269,PodSandboxId:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713784057949970963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5,PodSandboxId:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713784035610721822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803,PodSandboxId:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1713784035389376110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c61ecc35-4805-407c-b245-72750ee722e9 name=/runtime.v1.RuntimeService/ListContainers
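The same inventory can also be fetched directly from the runtime instead of grepping debug logs: the RPCs named in these lines (/runtime.v1.RuntimeService/Version and /runtime.v1.RuntimeService/ListContainers) are the standard CRI v1 API, which crictl ps -a drives as well. A minimal Go sketch follows, under the assumption that CRI-O is listening on its default socket, /var/run/crio/crio.sock; it is illustrative only, not taken from the report.

// cri_list.go -- a minimal sketch of issuing the same CRI RPCs that appear in
// the crio debug log above.  The socket path is an assumption (CRI-O's usual
// default); adjust it for other runtimes.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same call as the "&VersionRequest{}" lines in the log.
	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// Same call as the "&ListContainersRequest{...}" lines: an empty filter
	// returns the full container list, exactly as the log notes.
	list, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%-24s attempt=%d %s\n", c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}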
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.217962509Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6bbc9e4b-6c55-45e5-b3c3-8b8b7333de3a name=/runtime.v1.RuntimeService/Version
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.218059881Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6bbc9e4b-6c55-45e5-b3c3-8b8b7333de3a name=/runtime.v1.RuntimeService/Version
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.219784841Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8530aaa-4cb6-4863-9460-f2cd319d1e51 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.221120106Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713785008220994129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8530aaa-4cb6-4863-9460-f2cd319d1e51 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.222368190Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe26f567-829b-4588-8f7a-e8d49da62d84 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.222450247Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe26f567-829b-4588-8f7a-e8d49da62d84 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.223244376Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdbfadb4ed8d19096021578583930566b38bffb62ac75a0fa4bfa1854bc51c07,PodSandboxId:5718ac2f010731d932225775ad8b53843e8a598c210cccfd78b15fcdc08bebc4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713784773618457331,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7aebece9906bc4053b906e1ecb267481a893fb7ff00bed5de74ed1cfa54000,PodSandboxId:d7028b8f29863bad3892327487b14f468d4c9cb4a5469a9b4b34ae91148f52c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713784762600202240,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06935d6ef805f3d3cf6a05bcb64dc081d72aae88db019d142b68750d3cf1c867,PodSandboxId:c83e50830d3f4db878e319b1cce7bf9160c76a605bbddbb15186b52a363c346f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713784741599078298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9805a7cceb20ebe5dd98c40f4989b29929823f101fc5fa3e52ce922be823cf,PodSandboxId:3bce0f832a4b7c43c1a8d39bf39eebbaf7ef2c41958b8711e7cf2f48c892aa3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713784734596266351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef4332ea1f047dc3cfb415e4e3f6e85cf33465b2ef9f18f7951278d2479ca93,PodSandboxId:be2b7bbc0a977b699d86984cd22eb583aada9b085c8a1907e359c7b60a8b31c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713784732974770708,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84fb7087c9854ea4137c3862fe9dc600a9d3f90b3fbc9522a51e51e681a08e1,PodSandboxId:0fdc24e0bdf40a907df680923387fa4412d3fe58200f36e9c854f25f4915fb23,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713784715707288035,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd9482ff289c9d12747129b48272b7a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:d3cbf7c282792930e1df477971a4bd28b78cb49c295f6e0ac2c8a454824de5d2,PodSandboxId:d7028b8f29863bad3892327487b14f468d4c9cb4a5469a9b4b34ae91148f52c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713784699818708388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:45ee3a04fea005538c47381509fdf3d9e53cfe0bb8e8e14149e912ea8a67cfd8,PodSandboxId:e92faf278f88298e80a1a96b654d4a23ef26ee537f98d7b37fa7e8ecaaaf94c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713784699723405687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba046555
40d31d9ebf03299c794d1c3c3623ea9026908acaac007ac12a740b4,PodSandboxId:2800fa5fa268e57b67df341230522c1d69a07af4049cd86a97df3ceb3abff22c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784700024922806,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:086db7b19ea3bed15f2bf46fc53e6befb389e2aa6d163eb0290b45841b20a974,PodSandboxId:c1f81420600b023ee32a3073be284b93a2cbc2919d2092900ade645f006f514f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713784699657509914,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb1b67b39ae4f69aabbcde5efda711e2ad29a9f6926b4f1e78b54d9fcd92ed97,PodSandboxId:3bce0f832a4b7c43c1a8d39bf39eebbaf7ef2c41958b8711e7cf2f48c892aa3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713784699622280749,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27ec30a0ad7913b7b1f7c2670ac92c8e3c79a52b67ccae30cf41067898e375c,PodSandboxId:e1d9c3b2c209a79ca019d0c6b4e1e1d23a1390aed1ee7374d58e639804d6cc5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713784699637480543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594b38d4c919f0bd4386634ffa22d99b282bd1d1e2d832d8b64e67b021e866d4,PodSandboxId:c83e50830d3f4db878e319b1cce7bf9160c76a605bbddbb15186b52a363c346f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713784699510289451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b77e388cf4a08ba51bd96cd087b8b7ef6a23d957a16f1deb5ca55943ffe9f4,PodSandboxId:5718ac2f010731d932225775ad8b53843e8a598c210cccfd78b15fcdc08bebc4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713784699059530582,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bec52a480e1ba363d05d69d01be4a0cee8746d920f0865e277f9c21bc87cbe3,PodSandboxId:97870b1b56dc1a85e2557bc8e33b02398db33a96e36cc229f4632c06816d7196,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784698992367850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e45e23c690bb79c7fd65070b3188b60b1c0041e0955b10386851453d93e8c2,PodSandboxId:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713784211175334269,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernete
s.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391,PodSandboxId:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713784060436950826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139,PodSandboxId:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713784060349985507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269,PodSandboxId:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713784057949970963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5,PodSandboxId:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713784035610721822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803,PodSandboxId:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1713784035389376110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe26f567-829b-4588-8f7a-e8d49da62d84 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.290975721Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e7423c99-30fa-4433-812f-830a2d68c409 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.291046808Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e7423c99-30fa-4433-812f-830a2d68c409 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.292127833Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ef7aadfa-6f61-4f50-a7eb-87dd0dd526fb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.293198242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713785008293172646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef7aadfa-6f61-4f50-a7eb-87dd0dd526fb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.293823302Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6df3e443-dc98-4259-a831-9e8431c2b9b1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.293907917Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6df3e443-dc98-4259-a831-9e8431c2b9b1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:23:28 ha-821265 crio[3844]: time="2024-04-22 11:23:28.294304154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdbfadb4ed8d19096021578583930566b38bffb62ac75a0fa4bfa1854bc51c07,PodSandboxId:5718ac2f010731d932225775ad8b53843e8a598c210cccfd78b15fcdc08bebc4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713784773618457331,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd7aebece9906bc4053b906e1ecb267481a893fb7ff00bed5de74ed1cfa54000,PodSandboxId:d7028b8f29863bad3892327487b14f468d4c9cb4a5469a9b4b34ae91148f52c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713784762600202240,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06935d6ef805f3d3cf6a05bcb64dc081d72aae88db019d142b68750d3cf1c867,PodSandboxId:c83e50830d3f4db878e319b1cce7bf9160c76a605bbddbb15186b52a363c346f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713784741599078298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9805a7cceb20ebe5dd98c40f4989b29929823f101fc5fa3e52ce922be823cf,PodSandboxId:3bce0f832a4b7c43c1a8d39bf39eebbaf7ef2c41958b8711e7cf2f48c892aa3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713784734596266351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef4332ea1f047dc3cfb415e4e3f6e85cf33465b2ef9f18f7951278d2479ca93,PodSandboxId:be2b7bbc0a977b699d86984cd22eb583aada9b085c8a1907e359c7b60a8b31c4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713784732974770708,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernetes.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84fb7087c9854ea4137c3862fe9dc600a9d3f90b3fbc9522a51e51e681a08e1,PodSandboxId:0fdc24e0bdf40a907df680923387fa4412d3fe58200f36e9c854f25f4915fb23,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713784715707288035,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd9482ff289c9d12747129b48272b7a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:d3cbf7c282792930e1df477971a4bd28b78cb49c295f6e0ac2c8a454824de5d2,PodSandboxId:d7028b8f29863bad3892327487b14f468d4c9cb4a5469a9b4b34ae91148f52c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713784699818708388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b44da93-f3fa-49b7-a701-5ab7a430374f,},Annotations:map[string]string{io.kubernetes.container.hash: 97b65705,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:45ee3a04fea005538c47381509fdf3d9e53cfe0bb8e8e14149e912ea8a67cfd8,PodSandboxId:e92faf278f88298e80a1a96b654d4a23ef26ee537f98d7b37fa7e8ecaaaf94c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713784699723405687,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba046555
40d31d9ebf03299c794d1c3c3623ea9026908acaac007ac12a740b4,PodSandboxId:2800fa5fa268e57b67df341230522c1d69a07af4049cd86a97df3ceb3abff22c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784700024922806,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:086db7b19ea3bed15f2bf46fc53e6befb389e2aa6d163eb0290b45841b20a974,PodSandboxId:c1f81420600b023ee32a3073be284b93a2cbc2919d2092900ade645f006f514f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713784699657509914,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb1b67b39ae4f69aabbcde5efda711e2ad29a9f6926b4f1e78b54d9fcd92ed97,PodSandboxId:3bce0f832a4b7c43c1a8d39bf39eebbaf7ef2c41958b8711e7cf2f48c892aa3e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713784699622280749,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b2b58b303a812e19616ac42b0b60aae,},Annotations:map[string]string{io.kubernetes.container.hash: 4b37b5d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27ec30a0ad7913b7b1f7c2670ac92c8e3c79a52b67ccae30cf41067898e375c,PodSandboxId:e1d9c3b2c209a79ca019d0c6b4e1e1d23a1390aed1ee7374d58e639804d6cc5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713784699637480543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.ter
minationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594b38d4c919f0bd4386634ffa22d99b282bd1d1e2d832d8b64e67b021e866d4,PodSandboxId:c83e50830d3f4db878e319b1cce7bf9160c76a605bbddbb15186b52a363c346f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713784699510289451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e7e7ddac3eb004675c7add1d1e064dc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b77e388cf4a08ba51bd96cd087b8b7ef6a23d957a16f1deb5ca55943ffe9f4,PodSandboxId:5718ac2f010731d932225775ad8b53843e8a598c210cccfd78b15fcdc08bebc4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713784699059530582,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-qbq9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751a17f-e26b-4ba8-81ce-077103c0aa1c,},Annotations:map[string]string{io.kubernetes.container.hash: f5b26a55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bec52a480e1ba363d05d69d01be4a0cee8746d920f0865e277f9c21bc87cbe3,PodSandboxId:97870b1b56dc1a85e2557bc8e33b02398db33a96e36cc229f4632c06816d7196,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713784698992367850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e45e23c690bb79c7fd65070b3188b60b1c0041e0955b10386851453d93e8c2,PodSandboxId:82d54024bc68a08eee3c2cc0b18e7fb33cd099191b5f7459c47109f97a3f7592,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713784211175334269,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b4r5w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1670d513-9071-4ee0-ae1b-7600c98019b8,},Annotations:map[string]string{io.kubernete
s.container.hash: 5113ac6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391,PodSandboxId:126db08ea55aca85342e8b7f3c944b3e420d06d55410be6b5b8c83ed8aaea027,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713784060436950826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ht7jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c404a830-ddce-4c49-9e54-05d45871b4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1d0fb98b,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139,PodSandboxId:84aaf42f76a8a064784395ee92d65a6be9d6ddc96fb911530ab4ab1c12faefa1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713784060349985507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-ft2jl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09e14815-b8e9-4b60-9b2c-c7d86cccb594,},Annotations:map[string]string{io.kubernetes.container.hash: f5070328,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269,PodSandboxId:626e64c737b2d764452e83cdf097ca6fc3248d79c58ccd5a488c8986fdfb101d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713784057949970963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w7r9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a4f7fc-5ce0-4d77-b30f-9d39cded457c,},Annotations:map[string]string{io.kubernetes.container.hash: bf585fa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5,PodSandboxId:68a372e9f954bec85212f490bbd41d4da504f0947a8f1e065b8dc63d7cf5db88,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713784035610721822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d47cc377f7ae04e53a8145721f1411a,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803,PodSandboxId:f773251009c17f15bd2065d44e9976fe2579a48750872b77f082f3b37a1a5747,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1713784035389376110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-821265,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b68bde0d14316a4c3a901fddeacfd54a,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9fd198,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6df3e443-dc98-4259-a831-9e8431c2b9b1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bdbfadb4ed8d1       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               3                   5718ac2f01073       kindnet-qbq9z
	bd7aebece9906       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   d7028b8f29863       storage-provisioner
	06935d6ef805f       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      4 minutes ago       Running             kube-controller-manager   2                   c83e50830d3f4       kube-controller-manager-ha-821265
	2f9805a7cceb2       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      4 minutes ago       Running             kube-apiserver            3                   3bce0f832a4b7       kube-apiserver-ha-821265
	bef4332ea1f04       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   be2b7bbc0a977       busybox-fc5497c4f-b4r5w
	e84fb7087c985       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      4 minutes ago       Running             kube-vip                  0                   0fdc24e0bdf40       kube-vip-ha-821265
	aba04655540d3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   2800fa5fa268e       coredns-7db6d8ff4d-ft2jl
	d3cbf7c282792       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   d7028b8f29863       storage-provisioner
	45ee3a04fea00       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      5 minutes ago       Running             kube-proxy                1                   e92faf278f882       kube-proxy-w7r9d
	086db7b19ea3b       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      5 minutes ago       Running             kube-scheduler            1                   c1f81420600b0       kube-scheduler-ha-821265
	d27ec30a0ad79       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   e1d9c3b2c209a       etcd-ha-821265
	fb1b67b39ae4f       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      5 minutes ago       Exited              kube-apiserver            2                   3bce0f832a4b7       kube-apiserver-ha-821265
	594b38d4c919f       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      5 minutes ago       Exited              kube-controller-manager   1                   c83e50830d3f4       kube-controller-manager-ha-821265
	65b77e388cf4a       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   5718ac2f01073       kindnet-qbq9z
	4bec52a480e1b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   97870b1b56dc1       coredns-7db6d8ff4d-ht7jl
	f9e45e23c690b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   82d54024bc68a       busybox-fc5497c4f-b4r5w
	28dbe3373b660       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   126db08ea55ac       coredns-7db6d8ff4d-ht7jl
	609e2855f754c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   84aaf42f76a8a       coredns-7db6d8ff4d-ft2jl
	1f43ea569f86c       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      15 minutes ago      Exited              kube-proxy                0                   626e64c737b2d       kube-proxy-w7r9d
	2b3935bd9c893       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      16 minutes ago      Exited              kube-scheduler            0                   68a372e9f954b       kube-scheduler-ha-821265
	ba49f85435f20       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   f773251009c17       etcd-ha-821265
	
	
	==> coredns [28dbe3373b660061d706be023eb9515318a583f0f1fb735faf35e2fffc13f391] <==
	[INFO] 10.244.1.2:43358 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000160709s
	[INFO] 10.244.1.2:55629 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195731s
	[INFO] 10.244.1.2:44290 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121655s
	[INFO] 10.244.1.2:57358 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121564s
	[INFO] 10.244.2.2:59048 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159182s
	[INFO] 10.244.2.2:35567 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001954066s
	[INFO] 10.244.2.2:51799 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000221645s
	[INFO] 10.244.2.2:34300 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001398818s
	[INFO] 10.244.2.2:44605 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141089s
	[INFO] 10.244.2.2:60699 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114317s
	[INFO] 10.244.2.2:47652 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110384s
	[INFO] 10.244.0.4:58761 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147629s
	[INFO] 10.244.0.4:45372 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061515s
	[INFO] 10.244.1.2:39990 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000301231s
	[INFO] 10.244.2.2:38384 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218658s
	[INFO] 10.244.2.2:42087 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096499s
	[INFO] 10.244.2.2:46418 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091631s
	[INFO] 10.244.0.4:38705 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140004s
	[INFO] 10.244.2.2:47355 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124377s
	[INFO] 10.244.2.2:41383 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000176022s
	[INFO] 10.244.2.2:36036 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000263019s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1887&timeout=6m7s&timeoutSeconds=367&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1881&timeout=7m19s&timeoutSeconds=439&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [4bec52a480e1ba363d05d69d01be4a0cee8746d920f0865e277f9c21bc87cbe3] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[2107535893]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 11:18:28.211) (total time: 10002ms):
	Trace[2107535893]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (11:18:38.214)
	Trace[2107535893]: [10.002204922s] [10.002204922s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35706->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1607095593]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 11:18:31.870) (total time: 12288ms):
	Trace[1607095593]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35706->10.96.0.1:443: read: connection reset by peer 12288ms (11:18:44.158)
	Trace[1607095593]: [12.288380026s] [12.288380026s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:35706->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [609e2855f754ce317a2cad749d16f773349ffc248725fb9417b473c9be8df139] <==
	[INFO] 10.244.1.2:55844 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178035s
	[INFO] 10.244.2.2:56677 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000145596s
	[INFO] 10.244.2.2:55471 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000502508s
	[INFO] 10.244.0.4:48892 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000180363s
	[INFO] 10.244.0.4:39631 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00015636s
	[INFO] 10.244.1.2:41139 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001436054s
	[INFO] 10.244.1.2:50039 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000238831s
	[INFO] 10.244.2.2:49593 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099929s
	[INFO] 10.244.0.4:33617 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078273s
	[INFO] 10.244.0.4:35287 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154317s
	[INFO] 10.244.1.2:52682 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133804s
	[INFO] 10.244.1.2:40594 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130792s
	[INFO] 10.244.1.2:39775 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009509s
	[INFO] 10.244.2.2:55863 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00021768s
	[INFO] 10.244.0.4:36835 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092568s
	[INFO] 10.244.0.4:53708 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00016929s
	[INFO] 10.244.0.4:44024 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000203916s
	[INFO] 10.244.1.2:50167 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158884s
	[INFO] 10.244.1.2:49103 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120664s
	[INFO] 10.244.1.2:44739 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000212444s
	[INFO] 10.244.1.2:43569 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000207516s
	[INFO] 10.244.2.2:48876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000228682s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1878&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [aba04655540d31d9ebf03299c794d1c3c3623ea9026908acaac007ac12a740b4] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43898->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[638564627]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 11:18:31.728) (total time: 12428ms):
	Trace[638564627]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43898->10.96.0.1:443: read: connection reset by peer 12428ms (11:18:44.157)
	Trace[638564627]: [12.428753633s] [12.428753633s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43898->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43892->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[63172965]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 11:18:31.714) (total time: 12443ms):
	Trace[63172965]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43892->10.96.0.1:443: read: connection reset by peer 12443ms (11:18:44.157)
	Trace[63172965]: [12.443525444s] [12.443525444s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43892->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-821265
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-821265
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=ha-821265
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T11_07_22_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:07:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-821265
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:23:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:19:07 +0000   Mon, 22 Apr 2024 11:07:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:19:07 +0000   Mon, 22 Apr 2024 11:07:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:19:07 +0000   Mon, 22 Apr 2024 11:07:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:19:07 +0000   Mon, 22 Apr 2024 11:07:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    ha-821265
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3708e3d49144fe9a219d30c45824055
	  System UUID:                e3708e3d-4914-4fe9-a219-d30c45824055
	  Boot ID:                    59d6bf31-99bc-4f8f-942a-1d3384515d3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-b4r5w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-ft2jl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-ht7jl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-821265                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-qbq9z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-821265             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-821265    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-w7r9d                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-821265             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-821265                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4m25s              kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node ha-821265 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node ha-821265 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node ha-821265 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     16m                kubelet          Node ha-821265 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                kubelet          Node ha-821265 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                kubelet          Node ha-821265 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           15m                node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	  Normal   NodeReady                15m                kubelet          Node ha-821265 status is now: NodeReady
	  Normal   RegisteredNode           14m                node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	  Warning  ContainerGCFailed        6m7s               kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m11s              node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	  Normal   RegisteredNode           4m7s               node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	  Normal   RegisteredNode           3m12s              node-controller  Node ha-821265 event: Registered Node ha-821265 in Controller
	
	
	Name:               ha-821265-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-821265-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=ha-821265
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T11_08_32_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:08:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-821265-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:23:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:19:53 +0000   Mon, 22 Apr 2024 11:19:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:19:53 +0000   Mon, 22 Apr 2024 11:19:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:19:53 +0000   Mon, 22 Apr 2024 11:19:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:19:53 +0000   Mon, 22 Apr 2024 11:19:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    ha-821265-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee4ee33670c847d689ce31a8a149631b
	  System UUID:                ee4ee336-70c8-47d6-89ce-31a8a149631b
	  Boot ID:                    13e93955-74b9-4dbe-9ed2-e9a9f309e501
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ft78k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-821265-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-jm2pd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-821265-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-821265-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-j2hpk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-821265-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-821265-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-821265-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-821265-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-821265-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-821265-m02 status is now: NodeNotReady
	  Normal  Starting                 4m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m44s (x8 over 4m44s)  kubelet          Node ha-821265-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m44s (x8 over 4m44s)  kubelet          Node ha-821265-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m44s (x7 over 4m44s)  kubelet          Node ha-821265-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-821265-m02 event: Registered Node ha-821265-m02 in Controller
	
	
	Name:               ha-821265-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-821265-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=ha-821265
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T11_10_47_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:10:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-821265-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:20:59 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 22 Apr 2024 11:20:39 +0000   Mon, 22 Apr 2024 11:21:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 22 Apr 2024 11:20:39 +0000   Mon, 22 Apr 2024 11:21:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 22 Apr 2024 11:20:39 +0000   Mon, 22 Apr 2024 11:21:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 22 Apr 2024 11:20:39 +0000   Mon, 22 Apr 2024 11:21:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.252
	  Hostname:    ha-821265-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd9646c23a234a60a7a73b7377025a34
	  System UUID:                dd9646c2-3a23-4a60-a7a7-3b7377025a34
	  Boot ID:                    fd5616ec-d6c4-4418-82ba-4bb6990e0f81
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kwjh2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-gvgbm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-hdvbv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   RegisteredNode           12m                    node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-821265-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-821265-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-821265-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-821265-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal   RegisteredNode           4m7s                   node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal   NodeNotReady             3m31s                  node-controller  Node ha-821265-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m12s                  node-controller  Node ha-821265-m04 event: Registered Node ha-821265-m04 in Controller
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m49s (x3 over 2m49s)  kubelet          Node ha-821265-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x3 over 2m49s)  kubelet          Node ha-821265-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x3 over 2m49s)  kubelet          Node ha-821265-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m49s (x2 over 2m49s)  kubelet          Node ha-821265-m04 has been rebooted, boot id: fd5616ec-d6c4-4418-82ba-4bb6990e0f81
	  Normal   NodeReady                2m49s (x2 over 2m49s)  kubelet          Node ha-821265-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s                   node-controller  Node ha-821265-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Apr22 11:07] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.062413] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064974] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.181323] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.148920] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.299663] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.930467] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.065860] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.137174] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.064357] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.162362] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.079557] kauditd_printk_skb: 79 callbacks suppressed
	[ +16.384158] kauditd_printk_skb: 21 callbacks suppressed
	[Apr22 11:08] kauditd_printk_skb: 74 callbacks suppressed
	[Apr22 11:15] kauditd_printk_skb: 1 callbacks suppressed
	[Apr22 11:18] systemd-fstab-generator[3761]: Ignoring "noauto" option for root device
	[  +0.159564] systemd-fstab-generator[3773]: Ignoring "noauto" option for root device
	[  +0.190785] systemd-fstab-generator[3787]: Ignoring "noauto" option for root device
	[  +0.167063] systemd-fstab-generator[3799]: Ignoring "noauto" option for root device
	[  +0.313719] systemd-fstab-generator[3827]: Ignoring "noauto" option for root device
	[  +1.090326] systemd-fstab-generator[3935]: Ignoring "noauto" option for root device
	[  +3.195514] kauditd_printk_skb: 202 callbacks suppressed
	[ +11.480878] kauditd_printk_skb: 5 callbacks suppressed
	[ +10.074747] kauditd_printk_skb: 1 callbacks suppressed
	[Apr22 11:19] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [ba49f85435f20c5e5f43a285a09c2344ab0eb98efa1efcef40308fec44a77803] <==
	2024/04/22 11:16:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 11:16:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 11:16:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 11:16:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 11:16:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 11:16:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 11:16:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-22T11:16:43.48226Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"2236e2deb63504cb","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-22T11:16:43.482492Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e74e5cf98cfb462d"}
	{"level":"info","ts":"2024-04-22T11:16:43.482616Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e74e5cf98cfb462d"}
	{"level":"info","ts":"2024-04-22T11:16:43.482674Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e74e5cf98cfb462d"}
	{"level":"info","ts":"2024-04-22T11:16:43.482821Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d"}
	{"level":"info","ts":"2024-04-22T11:16:43.482895Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d"}
	{"level":"info","ts":"2024-04-22T11:16:43.482957Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"e74e5cf98cfb462d"}
	{"level":"info","ts":"2024-04-22T11:16:43.482994Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e74e5cf98cfb462d"}
	{"level":"info","ts":"2024-04-22T11:16:43.483047Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:16:43.483083Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:16:43.483133Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:16:43.483246Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:16:43.48331Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:16:43.483381Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:16:43.483417Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:16:43.487528Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2024-04-22T11:16:43.487779Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2024-04-22T11:16:43.487838Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-821265","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.150:2380"],"advertise-client-urls":["https://192.168.39.150:2379"]}
	
	
	==> etcd [d27ec30a0ad7913b7b1f7c2670ac92c8e3c79a52b67ccae30cf41067898e375c] <==
	{"level":"info","ts":"2024-04-22T11:19:59.114318Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:19:59.122922Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:19:59.124026Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"2236e2deb63504cb","to":"67256953526d7fbe","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-22T11:19:59.124087Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe"}
	{"level":"warn","ts":"2024-04-22T11:19:59.142662Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.95:45970","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-04-22T11:20:00.155319Z","caller":"traceutil/trace.go:171","msg":"trace[1805991195] transaction","detail":"{read_only:false; response_revision:2351; number_of_response:1; }","duration":"159.448883ms","start":"2024-04-22T11:19:59.995828Z","end":"2024-04-22T11:20:00.155277Z","steps":["trace[1805991195] 'process raft request'  (duration: 157.620199ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:20:00.705224Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"67256953526d7fbe","rtt":"0s","error":"dial tcp 192.168.39.95:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T11:20:53.925891Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.95:55726","server-name":"","error":"read tcp 192.168.39.150:2379->192.168.39.95:55726: read: connection reset by peer"}
	{"level":"info","ts":"2024-04-22T11:20:53.959457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb switched to configuration voters=(2465407292199470283 16667361497826674221)"}
	{"level":"info","ts":"2024-04-22T11:20:53.962174Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"d5d2d7cf60dc9e96","local-member-id":"2236e2deb63504cb","removed-remote-peer-id":"67256953526d7fbe","removed-remote-peer-urls":["https://192.168.39.95:2380"]}
	{"level":"info","ts":"2024-04-22T11:20:53.962369Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"67256953526d7fbe"}
	{"level":"warn","ts":"2024-04-22T11:20:53.96285Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:20:53.962933Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"67256953526d7fbe"}
	{"level":"warn","ts":"2024-04-22T11:20:53.963607Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:20:53.963677Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:20:53.963833Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe"}
	{"level":"warn","ts":"2024-04-22T11:20:53.964107Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe","error":"context canceled"}
	{"level":"warn","ts":"2024-04-22T11:20:53.96419Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"67256953526d7fbe","error":"failed to read 67256953526d7fbe on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-22T11:20:53.964248Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe"}
	{"level":"warn","ts":"2024-04-22T11:20:53.964418Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe","error":"context canceled"}
	{"level":"info","ts":"2024-04-22T11:20:53.964476Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2236e2deb63504cb","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:20:53.964628Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"67256953526d7fbe"}
	{"level":"info","ts":"2024-04-22T11:20:53.964709Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"2236e2deb63504cb","removed-remote-peer-id":"67256953526d7fbe"}
	{"level":"warn","ts":"2024-04-22T11:20:53.980716Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"2236e2deb63504cb","remote-peer-id-stream-handler":"2236e2deb63504cb","remote-peer-id-from":"67256953526d7fbe"}
	{"level":"warn","ts":"2024-04-22T11:20:53.990666Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"2236e2deb63504cb","remote-peer-id-stream-handler":"2236e2deb63504cb","remote-peer-id-from":"67256953526d7fbe"}
	
	
	==> kernel <==
	 11:23:29 up 16 min,  0 users,  load average: 0.07, 0.31, 0.28
	Linux ha-821265 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [65b77e388cf4a08ba51bd96cd087b8b7ef6a23d957a16f1deb5ca55943ffe9f4] <==
	I0422 11:18:19.882446       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0422 11:18:19.884707       1 main.go:107] hostIP = 192.168.39.150
	podIP = 192.168.39.150
	I0422 11:18:19.884929       1 main.go:116] setting mtu 1500 for CNI 
	I0422 11:18:19.884944       1 main.go:146] kindnetd IP family: "ipv4"
	I0422 11:18:19.884967       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0422 11:18:20.208630       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0422 11:18:22.653121       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0422 11:18:25.725425       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0422 11:18:37.729113       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0422 11:18:41.085059       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
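
The crashed kindnet container above shows a bounded retry-then-panic startup pattern: the node list is retried a fixed number of times and the process deliberately panics once the limit is hit, so the pod is restarted rather than running without node data. A minimal sketch of that pattern follows; maxRetries, listNodes, and the retry count are illustrative names and values, not kindnet's actual code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// listNodes stands in for the API call that keeps failing in the log above.
func listNodes() error {
	return errors.New(`Get "https://10.96.0.1:443/api/v1/nodes": connect: no route to host`)
}

func main() {
	const maxRetries = 5 // illustrative limit
	var err error
	for attempt := 1; attempt <= maxRetries; attempt++ {
		if err = listNodes(); err == nil {
			fmt.Println("node list obtained, continuing startup")
			return
		}
		fmt.Printf("Failed to get nodes, retrying after error: %v\n", err)
		time.Sleep(2 * time.Second) // back off before the next attempt
	}
	// Same final behaviour as the log: give up and crash so the pod restarts.
	panic(fmt.Sprintf("Reached maximum retries obtaining node list: %v", err))
}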
	
	
	==> kindnet [bdbfadb4ed8d19096021578583930566b38bffb62ac75a0fa4bfa1854bc51c07] <==
	I0422 11:22:45.092276       1 main.go:250] Node ha-821265-m04 has CIDR [10.244.3.0/24] 
	I0422 11:22:55.106831       1 main.go:223] Handling node with IPs: map[192.168.39.150:{}]
	I0422 11:22:55.106930       1 main.go:227] handling current node
	I0422 11:22:55.106942       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I0422 11:22:55.106954       1 main.go:250] Node ha-821265-m02 has CIDR [10.244.1.0/24] 
	I0422 11:22:55.107057       1 main.go:223] Handling node with IPs: map[192.168.39.252:{}]
	I0422 11:22:55.107091       1 main.go:250] Node ha-821265-m04 has CIDR [10.244.3.0/24] 
	I0422 11:23:05.113609       1 main.go:223] Handling node with IPs: map[192.168.39.150:{}]
	I0422 11:23:05.113719       1 main.go:227] handling current node
	I0422 11:23:05.113748       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I0422 11:23:05.113767       1 main.go:250] Node ha-821265-m02 has CIDR [10.244.1.0/24] 
	I0422 11:23:05.113888       1 main.go:223] Handling node with IPs: map[192.168.39.252:{}]
	I0422 11:23:05.113908       1 main.go:250] Node ha-821265-m04 has CIDR [10.244.3.0/24] 
	I0422 11:23:15.130299       1 main.go:223] Handling node with IPs: map[192.168.39.150:{}]
	I0422 11:23:15.130359       1 main.go:227] handling current node
	I0422 11:23:15.130375       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I0422 11:23:15.130384       1 main.go:250] Node ha-821265-m02 has CIDR [10.244.1.0/24] 
	I0422 11:23:15.130523       1 main.go:223] Handling node with IPs: map[192.168.39.252:{}]
	I0422 11:23:15.130670       1 main.go:250] Node ha-821265-m04 has CIDR [10.244.3.0/24] 
	I0422 11:23:25.138632       1 main.go:223] Handling node with IPs: map[192.168.39.150:{}]
	I0422 11:23:25.138675       1 main.go:227] handling current node
	I0422 11:23:25.138687       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I0422 11:23:25.138693       1 main.go:250] Node ha-821265-m02 has CIDR [10.244.1.0/24] 
	I0422 11:23:25.138796       1 main.go:223] Handling node with IPs: map[192.168.39.252:{}]
	I0422 11:23:25.138800       1 main.go:250] Node ha-821265-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2f9805a7cceb20ebe5dd98c40f4989b29929823f101fc5fa3e52ce922be823cf] <==
	I0422 11:19:05.011039       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0422 11:19:05.110290       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0422 11:19:05.130094       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 11:19:05.130308       1 policy_source.go:224] refreshing policies
	I0422 11:19:05.130267       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0422 11:19:05.134676       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0422 11:19:05.135220       1 shared_informer.go:320] Caches are synced for configmaps
	I0422 11:19:05.136499       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0422 11:19:05.136690       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0422 11:19:05.137358       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0422 11:19:05.143144       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0422 11:19:05.143254       1 aggregator.go:165] initial CRD sync complete...
	I0422 11:19:05.143294       1 autoregister_controller.go:141] Starting autoregister controller
	I0422 11:19:05.143317       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0422 11:19:05.143340       1 cache.go:39] Caches are synced for autoregister controller
	I0422 11:19:05.144658       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0422 11:19:05.155166       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.95]
	I0422 11:19:05.156945       1 controller.go:615] quota admission added evaluator for: endpoints
	I0422 11:19:05.165822       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0422 11:19:05.173518       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0422 11:19:05.210386       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0422 11:19:05.944101       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0422 11:19:06.299150       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.150 192.168.39.95]
	W0422 11:19:26.297311       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.150 192.168.39.39]
	W0422 11:21:06.305813       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.150 192.168.39.39]
	
	
	==> kube-apiserver [fb1b67b39ae4f69aabbcde5efda711e2ad29a9f6926b4f1e78b54d9fcd92ed97] <==
	I0422 11:18:20.315680       1 options.go:221] external host was not specified, using 192.168.39.150
	I0422 11:18:20.324054       1 server.go:148] Version: v1.30.0
	I0422 11:18:20.324090       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:18:21.328015       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0422 11:18:21.331619       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 11:18:21.333210       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0422 11:18:21.333408       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0422 11:18:21.333646       1 instance.go:299] Using reconciler: lease
	W0422 11:18:41.328452       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0422 11:18:41.328893       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0422 11:18:41.334765       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0422 11:18:41.334925       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
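
This apiserver instance exits because it cannot complete a handshake with its etcd backend on 127.0.0.1:2379 before the deadline. A small, hypothetical reachability check of that endpoint is sketched below; the timeout value is an assumption chosen for illustration.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint the apiserver log shows it dialing for its storage backend.
	const etcdAddr = "127.0.0.1:2379"

	conn, err := net.DialTimeout("tcp", etcdAddr, 3*time.Second)
	if err != nil {
		// A failure here mirrors the dial/handshake errors in the log above.
		fmt.Println("etcd client port unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("TCP connection to etcd established from", conn.LocalAddr())
}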
	
	
	==> kube-controller-manager [06935d6ef805f3d3cf6a05bcb64dc081d72aae88db019d142b68750d3cf1c867] <==
	E0422 11:21:37.522069       1 gc_controller.go:153] "Failed to get node" err="node \"ha-821265-m03\" not found" logger="pod-garbage-collector-controller" node="ha-821265-m03"
	E0422 11:21:37.522109       1 gc_controller.go:153] "Failed to get node" err="node \"ha-821265-m03\" not found" logger="pod-garbage-collector-controller" node="ha-821265-m03"
	E0422 11:21:37.522133       1 gc_controller.go:153] "Failed to get node" err="node \"ha-821265-m03\" not found" logger="pod-garbage-collector-controller" node="ha-821265-m03"
	E0422 11:21:37.522160       1 gc_controller.go:153] "Failed to get node" err="node \"ha-821265-m03\" not found" logger="pod-garbage-collector-controller" node="ha-821265-m03"
	I0422 11:21:41.146304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.859598ms"
	I0422 11:21:41.146762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.572µs"
	E0422 11:21:57.522608       1 gc_controller.go:153] "Failed to get node" err="node \"ha-821265-m03\" not found" logger="pod-garbage-collector-controller" node="ha-821265-m03"
	E0422 11:21:57.522665       1 gc_controller.go:153] "Failed to get node" err="node \"ha-821265-m03\" not found" logger="pod-garbage-collector-controller" node="ha-821265-m03"
	E0422 11:21:57.522677       1 gc_controller.go:153] "Failed to get node" err="node \"ha-821265-m03\" not found" logger="pod-garbage-collector-controller" node="ha-821265-m03"
	E0422 11:21:57.522682       1 gc_controller.go:153] "Failed to get node" err="node \"ha-821265-m03\" not found" logger="pod-garbage-collector-controller" node="ha-821265-m03"
	E0422 11:21:57.522688       1 gc_controller.go:153] "Failed to get node" err="node \"ha-821265-m03\" not found" logger="pod-garbage-collector-controller" node="ha-821265-m03"
	I0422 11:21:57.539042       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-821265-m03"
	I0422 11:21:57.568403       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-821265-m03"
	I0422 11:21:57.569331       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-821265-m03"
	I0422 11:21:57.609791       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-821265-m03"
	I0422 11:21:57.609893       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-821265-m03"
	I0422 11:21:57.658084       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-821265-m03"
	I0422 11:21:57.658175       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-821265-m03"
	I0422 11:21:57.691878       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-821265-m03"
	I0422 11:21:57.691997       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-821265-m03"
	I0422 11:21:57.732631       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-821265-m03"
	I0422 11:21:57.732768       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-d8qgr"
	I0422 11:21:57.769934       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-d8qgr"
	I0422 11:21:57.770042       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-lmhp7"
	I0422 11:21:57.833224       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-lmhp7"
	
	
	==> kube-controller-manager [594b38d4c919f0bd4386634ffa22d99b282bd1d1e2d832d8b64e67b021e866d4] <==
	I0422 11:18:21.135631       1 serving.go:380] Generated self-signed cert in-memory
	I0422 11:18:21.900817       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0422 11:18:21.900911       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:18:21.902992       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0422 11:18:21.903104       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0422 11:18:21.903123       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0422 11:18:21.903138       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0422 11:18:42.342400       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.150:8443/healthz\": dial tcp 192.168.39.150:8443: connect: connection refused"
	
	
	==> kube-proxy [1f43ea569f86c6da0221ad11b84975f3fab68f9999821a395b05d1afb49f5269] <==
	E0422 11:15:37.406328       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1887": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:15:37.406398       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:15:37.406445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:15:37.406416       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-821265&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:15:37.406486       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-821265&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:15:43.549671       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-821265&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:15:43.549798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-821265&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:15:43.549915       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1887": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:15:43.549958       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1887": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:15:43.550025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:15:43.550054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:15:52.766496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-821265&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:15:52.766686       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-821265&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:15:52.766773       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:15:52.767043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:15:55.837807       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1887": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:15:55.837887       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1887": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:16:08.125822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-821265&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:16:08.126316       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-821265&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:16:11.197210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:16:11.197284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:16:14.270702       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1887": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:16:14.270799       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1887": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:16:38.846135       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:16:38.846537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [45ee3a04fea005538c47381509fdf3d9e53cfe0bb8e8e14149e912ea8a67cfd8] <==
	I0422 11:19:03.276816       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 11:19:03.276932       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 11:19:03.276956       1 server_linux.go:165] "Using iptables Proxier"
	I0422 11:19:03.280089       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 11:19:03.280481       1 server.go:872] "Version info" version="v1.30.0"
	I0422 11:19:03.280605       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:19:03.282987       1 config.go:192] "Starting service config controller"
	I0422 11:19:03.283046       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 11:19:03.283096       1 config.go:101] "Starting endpoint slice config controller"
	I0422 11:19:03.283113       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 11:19:03.284023       1 config.go:319] "Starting node config controller"
	I0422 11:19:03.284068       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0422 11:19:06.303360       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0422 11:19:06.303797       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:19:06.304046       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:19:06.304014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:19:06.304498       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 11:19:06.303899       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-821265&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 11:19:06.304772       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-821265&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0422 11:19:07.183470       1 shared_informer.go:320] Caches are synced for service config
	I0422 11:19:07.484367       1 shared_informer.go:320] Caches are synced for node config
	I0422 11:19:07.585488       1 shared_informer.go:320] Caches are synced for endpoint slice config
	W0422 11:21:51.323962       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0422 11:21:51.324110       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0422 11:21:51.324158       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [086db7b19ea3bed15f2bf46fc53e6befb389e2aa6d163eb0290b45841b20a974] <==
	W0422 11:19:05.048988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 11:19:05.048998       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 11:19:05.049034       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 11:19:05.049072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 11:19:05.049122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 11:19:05.049132       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 11:19:05.049225       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 11:19:05.049263       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 11:19:05.049308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 11:19:05.049316       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 11:19:05.049359       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 11:19:05.049407       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0422 11:19:05.049444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 11:19:05.049458       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 11:19:05.049499       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 11:19:05.049508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 11:19:05.049654       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 11:19:05.049693       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 11:19:05.049794       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 11:19:05.049831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0422 11:19:18.960696       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0422 11:20:50.615347       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kwjh2\": pod busybox-fc5497c4f-kwjh2 is already assigned to node \"ha-821265-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-kwjh2" node="ha-821265-m04"
	E0422 11:20:50.624196       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c2c1aae4-834b-4851-bea8-c4e978acaa03(default/busybox-fc5497c4f-kwjh2) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-kwjh2"
	E0422 11:20:50.624983       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-kwjh2\": pod busybox-fc5497c4f-kwjh2 is already assigned to node \"ha-821265-m04\"" pod="default/busybox-fc5497c4f-kwjh2"
	I0422 11:20:50.625073       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-kwjh2" node="ha-821265-m04"
	
	
	==> kube-scheduler [2b3935bd9c893d367d1a96bbfe83c0b1d50125bfe3272fe02c29b282d1d35de5] <==
	W0422 11:16:39.610805       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 11:16:39.610902       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0422 11:16:40.000188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 11:16:40.000306       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 11:16:40.072335       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 11:16:40.072405       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 11:16:40.169436       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 11:16:40.169497       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 11:16:40.179026       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 11:16:40.179061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 11:16:40.312136       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 11:16:40.312206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0422 11:16:40.332795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 11:16:40.332883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0422 11:16:40.405487       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 11:16:40.405706       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 11:16:40.498938       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 11:16:40.499043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 11:16:42.007234       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 11:16:42.007294       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 11:16:42.487490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 11:16:42.487759       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 11:16:43.289079       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 11:16:43.289144       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 11:16:43.406821       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 22 11:19:21 ha-821265 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:19:22 ha-821265 kubelet[1370]: I0422 11:19:22.582905    1370 scope.go:117] "RemoveContainer" containerID="d3cbf7c282792930e1df477971a4bd28b78cb49c295f6e0ac2c8a454824de5d2"
	Apr 22 11:19:33 ha-821265 kubelet[1370]: I0422 11:19:33.582422    1370 scope.go:117] "RemoveContainer" containerID="65b77e388cf4a08ba51bd96cd087b8b7ef6a23d957a16f1deb5ca55943ffe9f4"
	Apr 22 11:19:55 ha-821265 kubelet[1370]: I0422 11:19:55.582463    1370 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-821265" podUID="9322f0ee-9e3e-4585-9388-44ccd1417371"
	Apr 22 11:19:55 ha-821265 kubelet[1370]: I0422 11:19:55.604931    1370 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-821265"
	Apr 22 11:20:21 ha-821265 kubelet[1370]: E0422 11:20:21.619185    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:20:21 ha-821265 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:20:21 ha-821265 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:20:21 ha-821265 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:20:21 ha-821265 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:21:21 ha-821265 kubelet[1370]: E0422 11:21:21.620102    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:21:21 ha-821265 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:21:21 ha-821265 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:21:21 ha-821265 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:21:21 ha-821265 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:22:21 ha-821265 kubelet[1370]: E0422 11:22:21.619915    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:22:21 ha-821265 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:22:21 ha-821265 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:22:21 ha-821265 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:22:21 ha-821265 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:23:21 ha-821265 kubelet[1370]: E0422 11:23:21.620037    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:23:21 ha-821265 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:23:21 ha-821265 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:23:21 ha-821265 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:23:21 ha-821265 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 11:23:27.778488   36725 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18711-7633/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-821265 -n ha-821265
helpers_test.go:261: (dbg) Run:  kubectl --context ha-821265 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.09s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (310.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-254635
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-254635
E0422 11:41:40.376310   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 11:41:57.325215   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-254635: exit status 82 (2m2.707445572s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-254635-m03"  ...
	* Stopping node "multinode-254635-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-254635" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-254635 --wait=true -v=8 --alsologtostderr
E0422 11:44:20.692145   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
E0422 11:46:17.644041   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-254635 --wait=true -v=8 --alsologtostderr: (3m5.599624369s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-254635
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-254635 -n multinode-254635
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-254635 logs -n 25: (1.635832101s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-254635 ssh -n                                                                 | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-254635 cp multinode-254635-m02:/home/docker/cp-test.txt                       | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile714579271/001/cp-test_multinode-254635-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n                                                                 | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-254635 cp multinode-254635-m02:/home/docker/cp-test.txt                       | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635:/home/docker/cp-test_multinode-254635-m02_multinode-254635.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n                                                                 | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n multinode-254635 sudo cat                                       | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | /home/docker/cp-test_multinode-254635-m02_multinode-254635.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-254635 cp multinode-254635-m02:/home/docker/cp-test.txt                       | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m03:/home/docker/cp-test_multinode-254635-m02_multinode-254635-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n                                                                 | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n multinode-254635-m03 sudo cat                                   | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | /home/docker/cp-test_multinode-254635-m02_multinode-254635-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-254635 cp testdata/cp-test.txt                                                | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n                                                                 | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-254635 cp multinode-254635-m03:/home/docker/cp-test.txt                       | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile714579271/001/cp-test_multinode-254635-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n                                                                 | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-254635 cp multinode-254635-m03:/home/docker/cp-test.txt                       | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635:/home/docker/cp-test_multinode-254635-m03_multinode-254635.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n                                                                 | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n multinode-254635 sudo cat                                       | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | /home/docker/cp-test_multinode-254635-m03_multinode-254635.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-254635 cp multinode-254635-m03:/home/docker/cp-test.txt                       | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m02:/home/docker/cp-test_multinode-254635-m03_multinode-254635-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n                                                                 | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n multinode-254635-m02 sudo cat                                   | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | /home/docker/cp-test_multinode-254635-m03_multinode-254635-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-254635 node stop m03                                                          | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	| node    | multinode-254635 node start                                                             | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:41 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-254635                                                                | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:41 UTC |                     |
	| stop    | -p multinode-254635                                                                     | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:41 UTC |                     |
	| start   | -p multinode-254635                                                                     | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:43 UTC | 22 Apr 24 11:46 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-254635                                                                | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:46 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 11:43:21
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 11:43:21.817508   46587 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:43:21.817634   46587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:43:21.817644   46587 out.go:304] Setting ErrFile to fd 2...
	I0422 11:43:21.817649   46587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:43:21.817854   46587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:43:21.818393   46587 out.go:298] Setting JSON to false
	I0422 11:43:21.819370   46587 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5145,"bootTime":1713781057,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 11:43:21.819425   46587 start.go:139] virtualization: kvm guest
	I0422 11:43:21.822027   46587 out.go:177] * [multinode-254635] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 11:43:21.823920   46587 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 11:43:21.825792   46587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 11:43:21.823935   46587 notify.go:220] Checking for updates...
	I0422 11:43:21.828554   46587 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 11:43:21.830102   46587 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:43:21.831396   46587 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 11:43:21.832735   46587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 11:43:21.834521   46587 config.go:182] Loaded profile config "multinode-254635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:43:21.834641   46587 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 11:43:21.835102   46587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:43:21.835147   46587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:43:21.850203   46587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37641
	I0422 11:43:21.850629   46587 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:43:21.851103   46587 main.go:141] libmachine: Using API Version  1
	I0422 11:43:21.851125   46587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:43:21.851534   46587 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:43:21.851743   46587 main.go:141] libmachine: (multinode-254635) Calling .DriverName
	I0422 11:43:21.888032   46587 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 11:43:21.889327   46587 start.go:297] selected driver: kvm2
	I0422 11:43:21.889339   46587 start.go:901] validating driver "kvm2" against &{Name:multinode-254635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterNam
e:multinode-254635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget
:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:43:21.889469   46587 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 11:43:21.889765   46587 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 11:43:21.889829   46587 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18711-7633/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 11:43:21.903976   46587 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 11:43:21.905022   46587 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 11:43:21.905126   46587 cni.go:84] Creating CNI manager for ""
	I0422 11:43:21.905137   46587 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0422 11:43:21.905238   46587 start.go:340] cluster config:
	{Name:multinode-254635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-254635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubev
irt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:43:21.905519   46587 iso.go:125] acquiring lock: {Name:mkb6ac9fd17ffabc92a94047094130aad6203a95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 11:43:21.908043   46587 out.go:177] * Starting "multinode-254635" primary control-plane node in "multinode-254635" cluster
	I0422 11:43:21.909519   46587 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 11:43:21.909563   46587 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 11:43:21.909578   46587 cache.go:56] Caching tarball of preloaded images
	I0422 11:43:21.909662   46587 preload.go:173] Found /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 11:43:21.909677   46587 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 11:43:21.909836   46587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/config.json ...
	I0422 11:43:21.910075   46587 start.go:360] acquireMachinesLock for multinode-254635: {Name:mk5cb9b294e703b264c1f97ac968ffd01e93b576 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 11:43:21.910130   46587 start.go:364] duration metric: took 30.53µs to acquireMachinesLock for "multinode-254635"
	I0422 11:43:21.910150   46587 start.go:96] Skipping create...Using existing machine configuration
	I0422 11:43:21.910158   46587 fix.go:54] fixHost starting: 
	I0422 11:43:21.910437   46587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:43:21.910460   46587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:43:21.924478   46587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0422 11:43:21.924915   46587 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:43:21.925413   46587 main.go:141] libmachine: Using API Version  1
	I0422 11:43:21.925431   46587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:43:21.925762   46587 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:43:21.925955   46587 main.go:141] libmachine: (multinode-254635) Calling .DriverName
	I0422 11:43:21.926085   46587 main.go:141] libmachine: (multinode-254635) Calling .GetState
	I0422 11:43:21.927583   46587 fix.go:112] recreateIfNeeded on multinode-254635: state=Running err=<nil>
	W0422 11:43:21.927607   46587 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 11:43:21.930764   46587 out.go:177] * Updating the running kvm2 "multinode-254635" VM ...
	I0422 11:43:21.932126   46587 machine.go:94] provisionDockerMachine start ...
	I0422 11:43:21.932151   46587 main.go:141] libmachine: (multinode-254635) Calling .DriverName
	I0422 11:43:21.932355   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:43:21.934919   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:21.935372   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:43:21.935399   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:21.935563   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:43:21.935752   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:21.935911   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:21.936048   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:43:21.936190   46587 main.go:141] libmachine: Using SSH client type: native
	I0422 11:43:21.936362   46587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0422 11:43:21.936373   46587 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 11:43:22.046603   46587 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-254635
	
	I0422 11:43:22.046635   46587 main.go:141] libmachine: (multinode-254635) Calling .GetMachineName
	I0422 11:43:22.046887   46587 buildroot.go:166] provisioning hostname "multinode-254635"
	I0422 11:43:22.046914   46587 main.go:141] libmachine: (multinode-254635) Calling .GetMachineName
	I0422 11:43:22.047110   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:43:22.050100   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.050513   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:43:22.050533   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.050668   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:43:22.050855   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:22.051002   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:22.051266   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:43:22.051419   46587 main.go:141] libmachine: Using SSH client type: native
	I0422 11:43:22.051584   46587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0422 11:43:22.051597   46587 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-254635 && echo "multinode-254635" | sudo tee /etc/hostname
	I0422 11:43:22.180915   46587 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-254635
	
	I0422 11:43:22.180944   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:43:22.183744   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.184156   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:43:22.184182   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.184398   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:43:22.184628   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:22.184801   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:22.184948   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:43:22.185170   46587 main.go:141] libmachine: Using SSH client type: native
	I0422 11:43:22.185413   46587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0422 11:43:22.185440   46587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-254635' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-254635/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-254635' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 11:43:22.294704   46587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
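The SSH snippet above keeps /etc/hosts in step with the new hostname: rewrite an existing 127.0.1.1 entry, or append one if none exists. An illustrative way to confirm the result on the guest (not something the test itself runs; plain shell assumed available in the Buildroot image):

	hostname                                        # expected: multinode-254635
	grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts   # expected: 127.0.1.1 multinode-254635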
	I0422 11:43:22.294731   46587 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18711-7633/.minikube CaCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18711-7633/.minikube}
	I0422 11:43:22.294746   46587 buildroot.go:174] setting up certificates
	I0422 11:43:22.294753   46587 provision.go:84] configureAuth start
	I0422 11:43:22.294761   46587 main.go:141] libmachine: (multinode-254635) Calling .GetMachineName
	I0422 11:43:22.295006   46587 main.go:141] libmachine: (multinode-254635) Calling .GetIP
	I0422 11:43:22.297654   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.298082   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:43:22.298115   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.298192   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:43:22.300209   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.300548   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:43:22.300586   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.300738   46587 provision.go:143] copyHostCerts
	I0422 11:43:22.300785   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:43:22.300822   46587 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem, removing ...
	I0422 11:43:22.300833   46587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:43:22.300915   46587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem (1078 bytes)
	I0422 11:43:22.301029   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:43:22.301058   46587 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem, removing ...
	I0422 11:43:22.301064   46587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:43:22.301107   46587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem (1123 bytes)
	I0422 11:43:22.301178   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:43:22.301208   46587 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem, removing ...
	I0422 11:43:22.301213   46587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:43:22.301246   46587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem (1679 bytes)
	I0422 11:43:22.301299   46587 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem org=jenkins.multinode-254635 san=[127.0.0.1 192.168.39.185 localhost minikube multinode-254635]
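provision.go then mints a server certificate signed by the minikube CA with the SANs listed above. For orientation only, a roughly equivalent openssl invocation is sketched below; minikube generates the certificate in Go rather than shelling out, so treat the commands and file names as an illustration of the parameters, not what actually ran:

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.multinode-254635" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.185,DNS:localhost,DNS:minikube,DNS:multinode-254635")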
	I0422 11:43:22.364528   46587 provision.go:177] copyRemoteCerts
	I0422 11:43:22.364603   46587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 11:43:22.364632   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:43:22.367401   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.367781   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:43:22.367808   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.368014   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:43:22.368197   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:22.368413   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:43:22.368559   46587 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/multinode-254635/id_rsa Username:docker}
	I0422 11:43:22.456834   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 11:43:22.456901   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 11:43:22.485546   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 11:43:22.485651   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0422 11:43:22.513757   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 11:43:22.513838   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
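With copyRemoteCerts done, the CA certificate, server certificate and server key now sit under /etc/docker on the guest. An illustrative spot-check (not part of the run; openssl assumed available):

	sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
	sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName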
	I0422 11:43:22.541468   46587 provision.go:87] duration metric: took 246.700467ms to configureAuth
	I0422 11:43:22.541499   46587 buildroot.go:189] setting minikube options for container-runtime
	I0422 11:43:22.541760   46587 config.go:182] Loaded profile config "multinode-254635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:43:22.541856   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:43:22.544518   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.544932   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:43:22.544958   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.545190   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:43:22.545410   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:22.545582   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:22.545721   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:43:22.545870   46587 main.go:141] libmachine: Using SSH client type: native
	I0422 11:43:22.546052   46587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0422 11:43:22.546074   46587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 11:44:53.249886   46587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 11:44:53.249910   46587 machine.go:97] duration metric: took 1m31.317768662s to provisionDockerMachine
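The 1m31s reported for provisionDockerMachine is almost entirely the single SSH command above: it was issued at 11:43:22 and only returned at 11:44:53, and the trailing "sudo systemctl restart crio" is the only step in it that plausibly takes that long. Illustrative commands to confirm the restart timing on the guest (assumes systemd and journald tooling in the Buildroot image; not run by the test):

	sudo systemctl show crio -p ExecMainStartTimestamp -p ActiveEnterTimestamp
	sudo journalctl -u crio --since "2024-04-22 11:43:00" --until "2024-04-22 11:45:00" --no-pager | tail -n 5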
	I0422 11:44:53.249923   46587 start.go:293] postStartSetup for "multinode-254635" (driver="kvm2")
	I0422 11:44:53.249933   46587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 11:44:53.249954   46587 main.go:141] libmachine: (multinode-254635) Calling .DriverName
	I0422 11:44:53.250265   46587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 11:44:53.250286   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:44:53.253445   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.253933   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:44:53.253973   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.254106   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:44:53.254297   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:44:53.254470   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:44:53.254600   46587 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/multinode-254635/id_rsa Username:docker}
	I0422 11:44:53.342751   46587 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 11:44:53.348163   46587 command_runner.go:130] > NAME=Buildroot
	I0422 11:44:53.348187   46587 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0422 11:44:53.348193   46587 command_runner.go:130] > ID=buildroot
	I0422 11:44:53.348200   46587 command_runner.go:130] > VERSION_ID=2023.02.9
	I0422 11:44:53.348207   46587 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0422 11:44:53.348245   46587 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 11:44:53.348260   46587 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/addons for local assets ...
	I0422 11:44:53.348322   46587 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/files for local assets ...
	I0422 11:44:53.348415   46587 filesync.go:149] local asset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> 149452.pem in /etc/ssl/certs
	I0422 11:44:53.348427   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /etc/ssl/certs/149452.pem
	I0422 11:44:53.348585   46587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 11:44:53.360192   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:44:53.387595   46587 start.go:296] duration metric: took 137.660281ms for postStartSetup
	I0422 11:44:53.387630   46587 fix.go:56] duration metric: took 1m31.477473792s for fixHost
	I0422 11:44:53.387655   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:44:53.390550   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.390991   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:44:53.391013   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.391250   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:44:53.391469   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:44:53.391632   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:44:53.391822   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:44:53.391994   46587 main.go:141] libmachine: Using SSH client type: native
	I0422 11:44:53.392143   46587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0422 11:44:53.392153   46587 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 11:44:53.498148   46587 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713786293.486659664
	
	I0422 11:44:53.498178   46587 fix.go:216] guest clock: 1713786293.486659664
	I0422 11:44:53.498186   46587 fix.go:229] Guest: 2024-04-22 11:44:53.486659664 +0000 UTC Remote: 2024-04-22 11:44:53.387634623 +0000 UTC m=+91.623189634 (delta=99.025041ms)
	I0422 11:44:53.498224   46587 fix.go:200] guest clock delta is within tolerance: 99.025041ms
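The delta above is simply the guest clock sample minus minikube's own wall-clock reading, both taken at 11:44:53 UTC. A quick arithmetic check (illustrative only):

	echo '1713786293.486659664 - 1713786293.387634623' | bc
	# 0.099025041  ->  ~99.025 ms, well inside the skew tolerance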
	I0422 11:44:53.498229   46587 start.go:83] releasing machines lock for "multinode-254635", held for 1m31.588085968s
	I0422 11:44:53.498251   46587 main.go:141] libmachine: (multinode-254635) Calling .DriverName
	I0422 11:44:53.498527   46587 main.go:141] libmachine: (multinode-254635) Calling .GetIP
	I0422 11:44:53.501308   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.501757   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:44:53.501780   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.501928   46587 main.go:141] libmachine: (multinode-254635) Calling .DriverName
	I0422 11:44:53.502490   46587 main.go:141] libmachine: (multinode-254635) Calling .DriverName
	I0422 11:44:53.502700   46587 main.go:141] libmachine: (multinode-254635) Calling .DriverName
	I0422 11:44:53.502786   46587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 11:44:53.502826   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:44:53.502924   46587 ssh_runner.go:195] Run: cat /version.json
	I0422 11:44:53.502953   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:44:53.505403   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.505756   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:44:53.505781   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.505800   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.505944   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:44:53.506103   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:44:53.506238   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:44:53.506262   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:44:53.506300   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.506412   46587 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/multinode-254635/id_rsa Username:docker}
	I0422 11:44:53.506486   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:44:53.506655   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:44:53.506790   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:44:53.506926   46587 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/multinode-254635/id_rsa Username:docker}
	I0422 11:44:53.586540   46587 command_runner.go:130] > {"iso_version": "v1.33.0", "kicbase_version": "v0.0.43-1713236840-18649", "minikube_version": "v1.33.0", "commit": "4bd203f0c710e7fdd30539846cf2bc6624a2556d"}
	I0422 11:44:53.586678   46587 ssh_runner.go:195] Run: systemctl --version
	I0422 11:44:53.615555   46587 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0422 11:44:53.616345   46587 command_runner.go:130] > systemd 252 (252)
	I0422 11:44:53.616378   46587 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0422 11:44:53.616434   46587 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 11:44:53.784536   46587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0422 11:44:53.799517   46587 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0422 11:44:53.799840   46587 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 11:44:53.799899   46587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 11:44:53.810413   46587 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0422 11:44:53.810430   46587 start.go:494] detecting cgroup driver to use...
	I0422 11:44:53.810484   46587 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 11:44:53.831515   46587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 11:44:53.847526   46587 docker.go:217] disabling cri-docker service (if available) ...
	I0422 11:44:53.847577   46587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 11:44:53.863611   46587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 11:44:53.879312   46587 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 11:44:54.041641   46587 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 11:44:54.183677   46587 docker.go:233] disabling docker service ...
	I0422 11:44:54.183755   46587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 11:44:54.200976   46587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 11:44:54.216037   46587 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 11:44:54.361219   46587 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 11:44:54.507567   46587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
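Before handing the node over to CRI-O, the steps above stop and mask the cri-docker and docker units. An illustrative way to confirm they stay out of the picture (not part of the test run; "masked"/"inactive" expected):

	sudo systemctl is-enabled docker.service cri-docker.service 2>/dev/null || true
	sudo systemctl is-active docker.service || true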
	I0422 11:44:54.522960   46587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 11:44:54.545728   46587 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0422 11:44:54.545761   46587 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 11:44:54.545810   46587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:44:54.557521   46587 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 11:44:54.557563   46587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:44:54.568658   46587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:44:54.579795   46587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:44:54.590928   46587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 11:44:54.602678   46587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:44:54.614228   46587 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:44:54.627245   46587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
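Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.9, switch CRI-O to the cgroupfs cgroup manager with conmon in the pod cgroup, and open unprivileged low ports via default_sysctls. An illustrative spot-check of the resulting drop-in (expected values sketched as comments; the real file may carry additional settings):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",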
	I0422 11:44:54.638478   46587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 11:44:54.648891   46587 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0422 11:44:54.648990   46587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 11:44:54.660198   46587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:44:54.815253   46587 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 11:44:55.072476   46587 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 11:44:55.072546   46587 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 11:44:55.078017   46587 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0422 11:44:55.078039   46587 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0422 11:44:55.078046   46587 command_runner.go:130] > Device: 0,22	Inode: 1321        Links: 1
	I0422 11:44:55.078052   46587 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0422 11:44:55.078057   46587 command_runner.go:130] > Access: 2024-04-22 11:44:54.946172638 +0000
	I0422 11:44:55.078063   46587 command_runner.go:130] > Modify: 2024-04-22 11:44:54.946172638 +0000
	I0422 11:44:55.078068   46587 command_runner.go:130] > Change: 2024-04-22 11:44:54.946172638 +0000
	I0422 11:44:55.078072   46587 command_runner.go:130] >  Birth: -
	I0422 11:44:55.078202   46587 start.go:562] Will wait 60s for crictl version
	I0422 11:44:55.078261   46587 ssh_runner.go:195] Run: which crictl
	I0422 11:44:55.082469   46587 command_runner.go:130] > /usr/bin/crictl
	I0422 11:44:55.082682   46587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 11:44:55.127824   46587 command_runner.go:130] > Version:  0.1.0
	I0422 11:44:55.127847   46587 command_runner.go:130] > RuntimeName:  cri-o
	I0422 11:44:55.127852   46587 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0422 11:44:55.127857   46587 command_runner.go:130] > RuntimeApiVersion:  v1
	I0422 11:44:55.128077   46587 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 11:44:55.128150   46587 ssh_runner.go:195] Run: crio --version
	I0422 11:44:55.162596   46587 command_runner.go:130] > crio version 1.29.1
	I0422 11:44:55.162619   46587 command_runner.go:130] > Version:        1.29.1
	I0422 11:44:55.162625   46587 command_runner.go:130] > GitCommit:      unknown
	I0422 11:44:55.162630   46587 command_runner.go:130] > GitCommitDate:  unknown
	I0422 11:44:55.162634   46587 command_runner.go:130] > GitTreeState:   clean
	I0422 11:44:55.162640   46587 command_runner.go:130] > BuildDate:      2024-04-18T23:15:22Z
	I0422 11:44:55.162644   46587 command_runner.go:130] > GoVersion:      go1.21.6
	I0422 11:44:55.162648   46587 command_runner.go:130] > Compiler:       gc
	I0422 11:44:55.162652   46587 command_runner.go:130] > Platform:       linux/amd64
	I0422 11:44:55.162656   46587 command_runner.go:130] > Linkmode:       dynamic
	I0422 11:44:55.162673   46587 command_runner.go:130] > BuildTags:      
	I0422 11:44:55.162677   46587 command_runner.go:130] >   containers_image_ostree_stub
	I0422 11:44:55.162682   46587 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0422 11:44:55.162691   46587 command_runner.go:130] >   btrfs_noversion
	I0422 11:44:55.162695   46587 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0422 11:44:55.162699   46587 command_runner.go:130] >   libdm_no_deferred_remove
	I0422 11:44:55.162703   46587 command_runner.go:130] >   seccomp
	I0422 11:44:55.162708   46587 command_runner.go:130] > LDFlags:          unknown
	I0422 11:44:55.162718   46587 command_runner.go:130] > SeccompEnabled:   true
	I0422 11:44:55.162730   46587 command_runner.go:130] > AppArmorEnabled:  false
	I0422 11:44:55.162809   46587 ssh_runner.go:195] Run: crio --version
	I0422 11:44:55.193470   46587 command_runner.go:130] > crio version 1.29.1
	I0422 11:44:55.193497   46587 command_runner.go:130] > Version:        1.29.1
	I0422 11:44:55.193506   46587 command_runner.go:130] > GitCommit:      unknown
	I0422 11:44:55.193512   46587 command_runner.go:130] > GitCommitDate:  unknown
	I0422 11:44:55.193521   46587 command_runner.go:130] > GitTreeState:   clean
	I0422 11:44:55.193530   46587 command_runner.go:130] > BuildDate:      2024-04-18T23:15:22Z
	I0422 11:44:55.193536   46587 command_runner.go:130] > GoVersion:      go1.21.6
	I0422 11:44:55.193542   46587 command_runner.go:130] > Compiler:       gc
	I0422 11:44:55.193550   46587 command_runner.go:130] > Platform:       linux/amd64
	I0422 11:44:55.193559   46587 command_runner.go:130] > Linkmode:       dynamic
	I0422 11:44:55.193585   46587 command_runner.go:130] > BuildTags:      
	I0422 11:44:55.193595   46587 command_runner.go:130] >   containers_image_ostree_stub
	I0422 11:44:55.193602   46587 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0422 11:44:55.193609   46587 command_runner.go:130] >   btrfs_noversion
	I0422 11:44:55.193619   46587 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0422 11:44:55.193628   46587 command_runner.go:130] >   libdm_no_deferred_remove
	I0422 11:44:55.193635   46587 command_runner.go:130] >   seccomp
	I0422 11:44:55.193644   46587 command_runner.go:130] > LDFlags:          unknown
	I0422 11:44:55.193653   46587 command_runner.go:130] > SeccompEnabled:   true
	I0422 11:44:55.193661   46587 command_runner.go:130] > AppArmorEnabled:  false
	I0422 11:44:55.197104   46587 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 11:44:55.198507   46587 main.go:141] libmachine: (multinode-254635) Calling .GetIP
	I0422 11:44:55.201216   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:55.201596   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:44:55.201625   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:55.201827   46587 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 11:44:55.206612   46587 command_runner.go:130] > 192.168.39.1	host.minikube.internal
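The grep above shows host.minikube.internal already mapped to 192.168.39.1, the host-side address of the virbr1 network this VM is attached to. Illustrative equivalents on the guest (not run by the test):

	getent hosts host.minikube.internal   # expected: 192.168.39.1  host.minikube.internal
	ip route show default                 # expected: default via 192.168.39.1 dev eth0 ...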
	I0422 11:44:55.206932   46587 kubeadm.go:877] updating cluster {Name:multinode-254635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-254
635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false isti
o:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 11:44:55.207074   46587 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 11:44:55.207128   46587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 11:44:55.256756   46587 command_runner.go:130] > {
	I0422 11:44:55.256798   46587 command_runner.go:130] >   "images": [
	I0422 11:44:55.256804   46587 command_runner.go:130] >     {
	I0422 11:44:55.256818   46587 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0422 11:44:55.256825   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.256834   46587 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0422 11:44:55.256842   46587 command_runner.go:130] >       ],
	I0422 11:44:55.256850   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.256871   46587 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0422 11:44:55.256884   46587 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0422 11:44:55.256893   46587 command_runner.go:130] >       ],
	I0422 11:44:55.256899   46587 command_runner.go:130] >       "size": "65291810",
	I0422 11:44:55.256908   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.256916   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.256929   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.256939   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.256944   46587 command_runner.go:130] >     },
	I0422 11:44:55.256953   46587 command_runner.go:130] >     {
	I0422 11:44:55.256962   46587 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0422 11:44:55.256971   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.256979   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0422 11:44:55.256989   46587 command_runner.go:130] >       ],
	I0422 11:44:55.256995   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.257010   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0422 11:44:55.257024   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0422 11:44:55.257032   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257039   46587 command_runner.go:130] >       "size": "1363676",
	I0422 11:44:55.257048   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.257058   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.257067   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.257072   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.257081   46587 command_runner.go:130] >     },
	I0422 11:44:55.257086   46587 command_runner.go:130] >     {
	I0422 11:44:55.257096   46587 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0422 11:44:55.257105   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.257113   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0422 11:44:55.257122   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257132   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.257146   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0422 11:44:55.257161   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0422 11:44:55.257171   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257180   46587 command_runner.go:130] >       "size": "31470524",
	I0422 11:44:55.257192   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.257201   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.257212   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.257221   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.257230   46587 command_runner.go:130] >     },
	I0422 11:44:55.257239   46587 command_runner.go:130] >     {
	I0422 11:44:55.257251   46587 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0422 11:44:55.257284   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.257295   46587 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0422 11:44:55.257303   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257312   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.257323   46587 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0422 11:44:55.257342   46587 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0422 11:44:55.257356   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257363   46587 command_runner.go:130] >       "size": "61245718",
	I0422 11:44:55.257369   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.257379   46587 command_runner.go:130] >       "username": "nonroot",
	I0422 11:44:55.257385   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.257395   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.257403   46587 command_runner.go:130] >     },
	I0422 11:44:55.257411   46587 command_runner.go:130] >     {
	I0422 11:44:55.257422   46587 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0422 11:44:55.257431   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.257442   46587 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0422 11:44:55.257450   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257459   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.257472   46587 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0422 11:44:55.257485   46587 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0422 11:44:55.257495   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257505   46587 command_runner.go:130] >       "size": "150779692",
	I0422 11:44:55.257513   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.257518   46587 command_runner.go:130] >         "value": "0"
	I0422 11:44:55.257527   46587 command_runner.go:130] >       },
	I0422 11:44:55.257536   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.257544   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.257551   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.257555   46587 command_runner.go:130] >     },
	I0422 11:44:55.257559   46587 command_runner.go:130] >     {
	I0422 11:44:55.257565   46587 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0422 11:44:55.257572   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.257577   46587 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0422 11:44:55.257583   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257588   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.257597   46587 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0422 11:44:55.257607   46587 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0422 11:44:55.257613   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257617   46587 command_runner.go:130] >       "size": "117609952",
	I0422 11:44:55.257623   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.257627   46587 command_runner.go:130] >         "value": "0"
	I0422 11:44:55.257633   46587 command_runner.go:130] >       },
	I0422 11:44:55.257637   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.257640   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.257645   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.257650   46587 command_runner.go:130] >     },
	I0422 11:44:55.257658   46587 command_runner.go:130] >     {
	I0422 11:44:55.257667   46587 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0422 11:44:55.257679   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.257691   46587 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0422 11:44:55.257699   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257706   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.257721   46587 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0422 11:44:55.257736   46587 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0422 11:44:55.257748   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257758   46587 command_runner.go:130] >       "size": "112170310",
	I0422 11:44:55.257766   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.257776   46587 command_runner.go:130] >         "value": "0"
	I0422 11:44:55.257785   46587 command_runner.go:130] >       },
	I0422 11:44:55.257794   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.257803   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.257810   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.257814   46587 command_runner.go:130] >     },
	I0422 11:44:55.257820   46587 command_runner.go:130] >     {
	I0422 11:44:55.257828   46587 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0422 11:44:55.257835   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.257839   46587 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0422 11:44:55.257845   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257849   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.257897   46587 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0422 11:44:55.257910   46587 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0422 11:44:55.257914   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257919   46587 command_runner.go:130] >       "size": "85932953",
	I0422 11:44:55.257923   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.257930   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.257934   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.257940   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.257944   46587 command_runner.go:130] >     },
	I0422 11:44:55.257948   46587 command_runner.go:130] >     {
	I0422 11:44:55.257953   46587 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0422 11:44:55.257957   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.257962   46587 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0422 11:44:55.257966   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257969   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.257976   46587 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0422 11:44:55.257983   46587 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0422 11:44:55.257986   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257990   46587 command_runner.go:130] >       "size": "63026502",
	I0422 11:44:55.257994   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.257997   46587 command_runner.go:130] >         "value": "0"
	I0422 11:44:55.258001   46587 command_runner.go:130] >       },
	I0422 11:44:55.258005   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.258008   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.258012   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.258015   46587 command_runner.go:130] >     },
	I0422 11:44:55.258018   46587 command_runner.go:130] >     {
	I0422 11:44:55.258024   46587 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0422 11:44:55.258034   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.258038   46587 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0422 11:44:55.258041   46587 command_runner.go:130] >       ],
	I0422 11:44:55.258046   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.258056   46587 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0422 11:44:55.258064   46587 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0422 11:44:55.258070   46587 command_runner.go:130] >       ],
	I0422 11:44:55.258074   46587 command_runner.go:130] >       "size": "750414",
	I0422 11:44:55.258080   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.258085   46587 command_runner.go:130] >         "value": "65535"
	I0422 11:44:55.258090   46587 command_runner.go:130] >       },
	I0422 11:44:55.258094   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.258100   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.258105   46587 command_runner.go:130] >       "pinned": true
	I0422 11:44:55.258111   46587 command_runner.go:130] >     }
	I0422 11:44:55.258114   46587 command_runner.go:130] >   ]
	I0422 11:44:55.258118   46587 command_runner.go:130] > }
	I0422 11:44:55.258300   46587 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 11:44:55.258312   46587 crio.go:433] Images already preloaded, skipping extraction
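crio.go concludes from the listing above that every image required for Kubernetes v1.30.0 on crio is already in the runtime's store, so no preload tarball needs to be extracted. A manual, illustrative version of the same check (jq and kubeadm assumed available wherever this is run; not part of the test):

	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort
	kubeadm config images list --kubernetes-version v1.30.0   # expected image set for this cluster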
	I0422 11:44:55.258352   46587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 11:44:55.296509   46587 command_runner.go:130] > {
	I0422 11:44:55.296529   46587 command_runner.go:130] >   "images": [
	I0422 11:44:55.296535   46587 command_runner.go:130] >     {
	I0422 11:44:55.296547   46587 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0422 11:44:55.296553   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.296562   46587 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0422 11:44:55.296567   46587 command_runner.go:130] >       ],
	I0422 11:44:55.296573   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.296585   46587 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0422 11:44:55.296597   46587 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0422 11:44:55.296607   46587 command_runner.go:130] >       ],
	I0422 11:44:55.296615   46587 command_runner.go:130] >       "size": "65291810",
	I0422 11:44:55.296623   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.296632   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.296650   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.296660   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.296666   46587 command_runner.go:130] >     },
	I0422 11:44:55.296673   46587 command_runner.go:130] >     {
	I0422 11:44:55.296688   46587 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0422 11:44:55.296698   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.296709   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0422 11:44:55.296718   46587 command_runner.go:130] >       ],
	I0422 11:44:55.296725   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.296737   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0422 11:44:55.296750   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0422 11:44:55.296759   46587 command_runner.go:130] >       ],
	I0422 11:44:55.296767   46587 command_runner.go:130] >       "size": "1363676",
	I0422 11:44:55.296791   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.296804   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.296813   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.296820   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.296826   46587 command_runner.go:130] >     },
	I0422 11:44:55.296832   46587 command_runner.go:130] >     {
	I0422 11:44:55.296842   46587 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0422 11:44:55.296852   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.296871   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0422 11:44:55.296878   46587 command_runner.go:130] >       ],
	I0422 11:44:55.296889   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.296904   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0422 11:44:55.296920   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0422 11:44:55.296929   46587 command_runner.go:130] >       ],
	I0422 11:44:55.296937   46587 command_runner.go:130] >       "size": "31470524",
	I0422 11:44:55.296947   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.296956   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.296966   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.296975   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.296983   46587 command_runner.go:130] >     },
	I0422 11:44:55.296990   46587 command_runner.go:130] >     {
	I0422 11:44:55.297003   46587 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0422 11:44:55.297011   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.297023   46587 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0422 11:44:55.297032   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297040   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.297056   46587 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0422 11:44:55.297077   46587 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0422 11:44:55.297086   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297094   46587 command_runner.go:130] >       "size": "61245718",
	I0422 11:44:55.297100   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.297110   46587 command_runner.go:130] >       "username": "nonroot",
	I0422 11:44:55.297122   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.297132   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.297140   46587 command_runner.go:130] >     },
	I0422 11:44:55.297149   46587 command_runner.go:130] >     {
	I0422 11:44:55.297160   46587 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0422 11:44:55.297171   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.297182   46587 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0422 11:44:55.297188   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297198   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.297212   46587 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0422 11:44:55.297227   46587 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0422 11:44:55.297235   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297243   46587 command_runner.go:130] >       "size": "150779692",
	I0422 11:44:55.297252   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.297260   46587 command_runner.go:130] >         "value": "0"
	I0422 11:44:55.297268   46587 command_runner.go:130] >       },
	I0422 11:44:55.297276   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.297286   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.297295   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.297301   46587 command_runner.go:130] >     },
	I0422 11:44:55.297311   46587 command_runner.go:130] >     {
	I0422 11:44:55.297322   46587 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0422 11:44:55.297338   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.297351   46587 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0422 11:44:55.297360   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297367   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.297383   46587 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0422 11:44:55.297399   46587 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0422 11:44:55.297409   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297417   46587 command_runner.go:130] >       "size": "117609952",
	I0422 11:44:55.297428   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.297437   46587 command_runner.go:130] >         "value": "0"
	I0422 11:44:55.297445   46587 command_runner.go:130] >       },
	I0422 11:44:55.297453   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.297461   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.297469   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.297477   46587 command_runner.go:130] >     },
	I0422 11:44:55.297483   46587 command_runner.go:130] >     {
	I0422 11:44:55.297494   46587 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0422 11:44:55.297505   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.297518   46587 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0422 11:44:55.297527   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297534   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.297551   46587 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0422 11:44:55.297567   46587 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0422 11:44:55.297579   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297591   46587 command_runner.go:130] >       "size": "112170310",
	I0422 11:44:55.297598   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.297606   46587 command_runner.go:130] >         "value": "0"
	I0422 11:44:55.297616   46587 command_runner.go:130] >       },
	I0422 11:44:55.297624   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.297634   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.297643   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.297652   46587 command_runner.go:130] >     },
	I0422 11:44:55.297660   46587 command_runner.go:130] >     {
	I0422 11:44:55.297673   46587 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0422 11:44:55.297684   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.297695   46587 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0422 11:44:55.297702   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297711   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.297732   46587 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0422 11:44:55.297749   46587 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0422 11:44:55.297755   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297765   46587 command_runner.go:130] >       "size": "85932953",
	I0422 11:44:55.297773   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.297783   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.297791   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.297800   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.297806   46587 command_runner.go:130] >     },
	I0422 11:44:55.297815   46587 command_runner.go:130] >     {
	I0422 11:44:55.297827   46587 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0422 11:44:55.297836   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.297845   46587 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0422 11:44:55.297854   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297861   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.297876   46587 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0422 11:44:55.297892   46587 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0422 11:44:55.297901   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297909   46587 command_runner.go:130] >       "size": "63026502",
	I0422 11:44:55.297919   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.297927   46587 command_runner.go:130] >         "value": "0"
	I0422 11:44:55.297933   46587 command_runner.go:130] >       },
	I0422 11:44:55.297942   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.297949   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.297956   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.297965   46587 command_runner.go:130] >     },
	I0422 11:44:55.297973   46587 command_runner.go:130] >     {
	I0422 11:44:55.297984   46587 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0422 11:44:55.297993   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.298002   46587 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0422 11:44:55.298010   46587 command_runner.go:130] >       ],
	I0422 11:44:55.298018   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.298033   46587 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0422 11:44:55.298052   46587 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0422 11:44:55.298060   46587 command_runner.go:130] >       ],
	I0422 11:44:55.298068   46587 command_runner.go:130] >       "size": "750414",
	I0422 11:44:55.298077   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.298084   46587 command_runner.go:130] >         "value": "65535"
	I0422 11:44:55.298092   46587 command_runner.go:130] >       },
	I0422 11:44:55.298099   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.298109   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.298120   46587 command_runner.go:130] >       "pinned": true
	I0422 11:44:55.298128   46587 command_runner.go:130] >     }
	I0422 11:44:55.298138   46587 command_runner.go:130] >   ]
	I0422 11:44:55.298145   46587 command_runner.go:130] > }
	I0422 11:44:55.298259   46587 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 11:44:55.298270   46587 cache_images.go:84] Images are preloaded, skipping loading
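	The JSON block above is the output of "sudo crictl images --output json" on the node; minikube compares the repoTags it finds against the image set expected for Kubernetes v1.30.0 before deciding to skip loading. A minimal way to reproduce that listing by hand, assuming jq is available on the node and using the profile name that appears in these logs, could be:

	# Sketch only, not part of the captured test run:
	# list every repoTag the CRI-O runtime reports on the node
	minikube -p multinode-254635 ssh "sudo crictl images --output json" | jq -r '.images[].repoTags[]'
	# expected to print the tags shown above, e.g. registry.k8s.io/kube-apiserver:v1.30.0

	If every required tag is present, the preload/extract step is skipped, which is what the crio.go and cache_images.go lines around this block record.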
	I0422 11:44:55.298281   46587 kubeadm.go:928] updating node { 192.168.39.185 8443 v1.30.0 crio true true} ...
	I0422 11:44:55.298407   46587 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-254635 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-254635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
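	The kubelet drop-in rendered above pins the node name and IP (--hostname-override=multinode-254635, --node-ip=192.168.39.185) and points the kubelet at the bootstrap kubeconfig, the final kubeconfig, and /var/lib/kubelet/config.yaml. A quick way to confirm what actually landed on the node would be to read the merged systemd unit and the kubelet config file directly; this is a sketch for manual debugging, not something the test executed:

	# Show the kubelet unit together with its drop-ins as systemd sees them
	minikube -p multinode-254635 ssh "systemctl cat kubelet"
	# Show the kubelet config file referenced by --config above
	minikube -p multinode-254635 ssh "sudo cat /var/lib/kubelet/config.yaml"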
	I0422 11:44:55.298484   46587 ssh_runner.go:195] Run: crio config
	I0422 11:44:55.343488   46587 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0422 11:44:55.343518   46587 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0422 11:44:55.343528   46587 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0422 11:44:55.343532   46587 command_runner.go:130] > #
	I0422 11:44:55.343541   46587 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0422 11:44:55.343550   46587 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0422 11:44:55.343558   46587 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0422 11:44:55.343568   46587 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0422 11:44:55.343576   46587 command_runner.go:130] > # reload'.
	I0422 11:44:55.343591   46587 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0422 11:44:55.343602   46587 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0422 11:44:55.343616   46587 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0422 11:44:55.343627   46587 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0422 11:44:55.343639   46587 command_runner.go:130] > [crio]
	I0422 11:44:55.343650   46587 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0422 11:44:55.343660   46587 command_runner.go:130] > # containers images, in this directory.
	I0422 11:44:55.343693   46587 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0422 11:44:55.343729   46587 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0422 11:44:55.343998   46587 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0422 11:44:55.344014   46587 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0422 11:44:55.344265   46587 command_runner.go:130] > # imagestore = ""
	I0422 11:44:55.344283   46587 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0422 11:44:55.344297   46587 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0422 11:44:55.344431   46587 command_runner.go:130] > storage_driver = "overlay"
	I0422 11:44:55.344445   46587 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0422 11:44:55.344454   46587 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0422 11:44:55.344460   46587 command_runner.go:130] > storage_option = [
	I0422 11:44:55.344602   46587 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0422 11:44:55.344680   46587 command_runner.go:130] > ]
	I0422 11:44:55.344694   46587 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0422 11:44:55.344704   46587 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0422 11:44:55.345120   46587 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0422 11:44:55.345137   46587 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0422 11:44:55.345147   46587 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0422 11:44:55.345154   46587 command_runner.go:130] > # always happen on a node reboot
	I0422 11:44:55.345432   46587 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0422 11:44:55.345451   46587 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0422 11:44:55.345461   46587 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0422 11:44:55.345472   46587 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0422 11:44:55.345591   46587 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0422 11:44:55.345606   46587 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0422 11:44:55.345619   46587 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0422 11:44:55.346024   46587 command_runner.go:130] > # internal_wipe = true
	I0422 11:44:55.346040   46587 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0422 11:44:55.346048   46587 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0422 11:44:55.346406   46587 command_runner.go:130] > # internal_repair = false
	I0422 11:44:55.346426   46587 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0422 11:44:55.346437   46587 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0422 11:44:55.346445   46587 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0422 11:44:55.346698   46587 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0422 11:44:55.346719   46587 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0422 11:44:55.346726   46587 command_runner.go:130] > [crio.api]
	I0422 11:44:55.346735   46587 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0422 11:44:55.347091   46587 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0422 11:44:55.347117   46587 command_runner.go:130] > # IP address on which the stream server will listen.
	I0422 11:44:55.347398   46587 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0422 11:44:55.347414   46587 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0422 11:44:55.347422   46587 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0422 11:44:55.347862   46587 command_runner.go:130] > # stream_port = "0"
	I0422 11:44:55.347875   46587 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0422 11:44:55.348174   46587 command_runner.go:130] > # stream_enable_tls = false
	I0422 11:44:55.348189   46587 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0422 11:44:55.348428   46587 command_runner.go:130] > # stream_idle_timeout = ""
	I0422 11:44:55.348443   46587 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0422 11:44:55.348454   46587 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0422 11:44:55.348462   46587 command_runner.go:130] > # minutes.
	I0422 11:44:55.348681   46587 command_runner.go:130] > # stream_tls_cert = ""
	I0422 11:44:55.348695   46587 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0422 11:44:55.348705   46587 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0422 11:44:55.349035   46587 command_runner.go:130] > # stream_tls_key = ""
	I0422 11:44:55.349050   46587 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0422 11:44:55.349061   46587 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0422 11:44:55.349078   46587 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0422 11:44:55.349263   46587 command_runner.go:130] > # stream_tls_ca = ""
	I0422 11:44:55.349280   46587 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0422 11:44:55.349474   46587 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0422 11:44:55.349495   46587 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0422 11:44:55.349551   46587 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0422 11:44:55.349567   46587 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0422 11:44:55.349577   46587 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0422 11:44:55.349587   46587 command_runner.go:130] > [crio.runtime]
	I0422 11:44:55.349599   46587 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0422 11:44:55.349611   46587 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0422 11:44:55.349619   46587 command_runner.go:130] > # "nofile=1024:2048"
	I0422 11:44:55.349635   46587 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0422 11:44:55.349882   46587 command_runner.go:130] > # default_ulimits = [
	I0422 11:44:55.350157   46587 command_runner.go:130] > # ]
	I0422 11:44:55.350168   46587 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0422 11:44:55.351733   46587 command_runner.go:130] > # no_pivot = false
	I0422 11:44:55.351749   46587 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0422 11:44:55.351759   46587 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0422 11:44:55.351770   46587 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0422 11:44:55.351779   46587 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0422 11:44:55.351804   46587 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0422 11:44:55.351817   46587 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0422 11:44:55.351825   46587 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0422 11:44:55.351832   46587 command_runner.go:130] > # Cgroup setting for conmon
	I0422 11:44:55.351844   46587 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0422 11:44:55.351854   46587 command_runner.go:130] > conmon_cgroup = "pod"
	I0422 11:44:55.351868   46587 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0422 11:44:55.351879   46587 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0422 11:44:55.351892   46587 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0422 11:44:55.351901   46587 command_runner.go:130] > conmon_env = [
	I0422 11:44:55.351914   46587 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0422 11:44:55.351922   46587 command_runner.go:130] > ]
	I0422 11:44:55.351934   46587 command_runner.go:130] > # Additional environment variables to set for all the
	I0422 11:44:55.351944   46587 command_runner.go:130] > # containers. These are overridden if set in the
	I0422 11:44:55.351956   46587 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0422 11:44:55.351966   46587 command_runner.go:130] > # default_env = [
	I0422 11:44:55.351974   46587 command_runner.go:130] > # ]
	I0422 11:44:55.351983   46587 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0422 11:44:55.352003   46587 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0422 11:44:55.352009   46587 command_runner.go:130] > # selinux = false
	I0422 11:44:55.352020   46587 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0422 11:44:55.352029   46587 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0422 11:44:55.352038   46587 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0422 11:44:55.352045   46587 command_runner.go:130] > # seccomp_profile = ""
	I0422 11:44:55.352053   46587 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0422 11:44:55.352063   46587 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0422 11:44:55.352076   46587 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0422 11:44:55.352086   46587 command_runner.go:130] > # which might increase security.
	I0422 11:44:55.352092   46587 command_runner.go:130] > # This option is currently deprecated,
	I0422 11:44:55.352103   46587 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0422 11:44:55.352113   46587 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0422 11:44:55.352124   46587 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0422 11:44:55.352135   46587 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0422 11:44:55.352147   46587 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0422 11:44:55.352159   46587 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0422 11:44:55.352170   46587 command_runner.go:130] > # This option supports live configuration reload.
	I0422 11:44:55.352185   46587 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0422 11:44:55.352199   46587 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0422 11:44:55.352209   46587 command_runner.go:130] > # the cgroup blockio controller.
	I0422 11:44:55.352220   46587 command_runner.go:130] > # blockio_config_file = ""
	I0422 11:44:55.352233   46587 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0422 11:44:55.352243   46587 command_runner.go:130] > # blockio parameters.
	I0422 11:44:55.352254   46587 command_runner.go:130] > # blockio_reload = false
	I0422 11:44:55.352268   46587 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0422 11:44:55.352276   46587 command_runner.go:130] > # irqbalance daemon.
	I0422 11:44:55.352287   46587 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0422 11:44:55.352298   46587 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0422 11:44:55.352311   46587 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0422 11:44:55.352326   46587 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0422 11:44:55.352337   46587 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0422 11:44:55.352345   46587 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0422 11:44:55.352355   46587 command_runner.go:130] > # This option supports live configuration reload.
	I0422 11:44:55.352364   46587 command_runner.go:130] > # rdt_config_file = ""
	I0422 11:44:55.352375   46587 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0422 11:44:55.352384   46587 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0422 11:44:55.352405   46587 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0422 11:44:55.352414   46587 command_runner.go:130] > # separate_pull_cgroup = ""
	I0422 11:44:55.352433   46587 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0422 11:44:55.352446   46587 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0422 11:44:55.352455   46587 command_runner.go:130] > # will be added.
	I0422 11:44:55.352465   46587 command_runner.go:130] > # default_capabilities = [
	I0422 11:44:55.352474   46587 command_runner.go:130] > # 	"CHOWN",
	I0422 11:44:55.352483   46587 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0422 11:44:55.352491   46587 command_runner.go:130] > # 	"FSETID",
	I0422 11:44:55.352498   46587 command_runner.go:130] > # 	"FOWNER",
	I0422 11:44:55.352506   46587 command_runner.go:130] > # 	"SETGID",
	I0422 11:44:55.352511   46587 command_runner.go:130] > # 	"SETUID",
	I0422 11:44:55.352518   46587 command_runner.go:130] > # 	"SETPCAP",
	I0422 11:44:55.352525   46587 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0422 11:44:55.352533   46587 command_runner.go:130] > # 	"KILL",
	I0422 11:44:55.352538   46587 command_runner.go:130] > # ]
	I0422 11:44:55.352551   46587 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0422 11:44:55.352566   46587 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0422 11:44:55.352577   46587 command_runner.go:130] > # add_inheritable_capabilities = false
	I0422 11:44:55.352589   46587 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0422 11:44:55.352600   46587 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0422 11:44:55.352609   46587 command_runner.go:130] > default_sysctls = [
	I0422 11:44:55.352622   46587 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0422 11:44:55.352630   46587 command_runner.go:130] > ]
	I0422 11:44:55.352642   46587 command_runner.go:130] > # List of devices on the host that a
	I0422 11:44:55.352653   46587 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0422 11:44:55.352662   46587 command_runner.go:130] > # allowed_devices = [
	I0422 11:44:55.352670   46587 command_runner.go:130] > # 	"/dev/fuse",
	I0422 11:44:55.352678   46587 command_runner.go:130] > # ]
	I0422 11:44:55.352686   46587 command_runner.go:130] > # List of additional devices. specified as
	I0422 11:44:55.352700   46587 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0422 11:44:55.352712   46587 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0422 11:44:55.352723   46587 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0422 11:44:55.352732   46587 command_runner.go:130] > # additional_devices = [
	I0422 11:44:55.352739   46587 command_runner.go:130] > # ]
	I0422 11:44:55.352746   46587 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0422 11:44:55.352754   46587 command_runner.go:130] > # cdi_spec_dirs = [
	I0422 11:44:55.352761   46587 command_runner.go:130] > # 	"/etc/cdi",
	I0422 11:44:55.352779   46587 command_runner.go:130] > # 	"/var/run/cdi",
	I0422 11:44:55.352785   46587 command_runner.go:130] > # ]
	I0422 11:44:55.352795   46587 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0422 11:44:55.352808   46587 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0422 11:44:55.352816   46587 command_runner.go:130] > # Defaults to false.
	I0422 11:44:55.352825   46587 command_runner.go:130] > # device_ownership_from_security_context = false
	I0422 11:44:55.352837   46587 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0422 11:44:55.352849   46587 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0422 11:44:55.352858   46587 command_runner.go:130] > # hooks_dir = [
	I0422 11:44:55.352869   46587 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0422 11:44:55.352877   46587 command_runner.go:130] > # ]
	I0422 11:44:55.352889   46587 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0422 11:44:55.352901   46587 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0422 11:44:55.352910   46587 command_runner.go:130] > # its default mounts from the following two files:
	I0422 11:44:55.352917   46587 command_runner.go:130] > #
	I0422 11:44:55.352925   46587 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0422 11:44:55.352939   46587 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0422 11:44:55.352950   46587 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0422 11:44:55.352958   46587 command_runner.go:130] > #
	I0422 11:44:55.352971   46587 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0422 11:44:55.352984   46587 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0422 11:44:55.352997   46587 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0422 11:44:55.353008   46587 command_runner.go:130] > #      only add mounts it finds in this file.
	I0422 11:44:55.353016   46587 command_runner.go:130] > #
	I0422 11:44:55.353022   46587 command_runner.go:130] > # default_mounts_file = ""
	I0422 11:44:55.353033   46587 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0422 11:44:55.353050   46587 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0422 11:44:55.353059   46587 command_runner.go:130] > pids_limit = 1024
	I0422 11:44:55.353069   46587 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0422 11:44:55.353082   46587 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0422 11:44:55.353095   46587 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0422 11:44:55.353109   46587 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0422 11:44:55.353117   46587 command_runner.go:130] > # log_size_max = -1
	I0422 11:44:55.353129   46587 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0422 11:44:55.353138   46587 command_runner.go:130] > # log_to_journald = false
	I0422 11:44:55.353150   46587 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0422 11:44:55.353159   46587 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0422 11:44:55.353170   46587 command_runner.go:130] > # Path to directory for container attach sockets.
	I0422 11:44:55.353180   46587 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0422 11:44:55.353191   46587 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0422 11:44:55.353199   46587 command_runner.go:130] > # bind_mount_prefix = ""
	I0422 11:44:55.353211   46587 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0422 11:44:55.353219   46587 command_runner.go:130] > # read_only = false
	I0422 11:44:55.353232   46587 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0422 11:44:55.353245   46587 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0422 11:44:55.353254   46587 command_runner.go:130] > # live configuration reload.
	I0422 11:44:55.353263   46587 command_runner.go:130] > # log_level = "info"
	I0422 11:44:55.353274   46587 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0422 11:44:55.353284   46587 command_runner.go:130] > # This option supports live configuration reload.
	I0422 11:44:55.353293   46587 command_runner.go:130] > # log_filter = ""
	I0422 11:44:55.353305   46587 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0422 11:44:55.353317   46587 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0422 11:44:55.353326   46587 command_runner.go:130] > # separated by comma.
	I0422 11:44:55.353333   46587 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0422 11:44:55.353340   46587 command_runner.go:130] > # uid_mappings = ""
	I0422 11:44:55.353346   46587 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0422 11:44:55.353355   46587 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0422 11:44:55.353361   46587 command_runner.go:130] > # separated by comma.
	I0422 11:44:55.353368   46587 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0422 11:44:55.353374   46587 command_runner.go:130] > # gid_mappings = ""
	I0422 11:44:55.353380   46587 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0422 11:44:55.353389   46587 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0422 11:44:55.353402   46587 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0422 11:44:55.353412   46587 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0422 11:44:55.353418   46587 command_runner.go:130] > # minimum_mappable_uid = -1
	I0422 11:44:55.353429   46587 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0422 11:44:55.353437   46587 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0422 11:44:55.353443   46587 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0422 11:44:55.353452   46587 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0422 11:44:55.353456   46587 command_runner.go:130] > # minimum_mappable_gid = -1
	I0422 11:44:55.353462   46587 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0422 11:44:55.353471   46587 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0422 11:44:55.353483   46587 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0422 11:44:55.353492   46587 command_runner.go:130] > # ctr_stop_timeout = 30
	I0422 11:44:55.353501   46587 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0422 11:44:55.353513   46587 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0422 11:44:55.353522   46587 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0422 11:44:55.353526   46587 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0422 11:44:55.353530   46587 command_runner.go:130] > drop_infra_ctr = false
	I0422 11:44:55.353542   46587 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0422 11:44:55.353553   46587 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0422 11:44:55.353567   46587 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0422 11:44:55.353576   46587 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0422 11:44:55.353589   46587 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0422 11:44:55.353602   46587 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0422 11:44:55.353613   46587 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0422 11:44:55.353622   46587 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0422 11:44:55.353631   46587 command_runner.go:130] > # shared_cpuset = ""
	I0422 11:44:55.353644   46587 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0422 11:44:55.353654   46587 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0422 11:44:55.353663   46587 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0422 11:44:55.353677   46587 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0422 11:44:55.353687   46587 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0422 11:44:55.353699   46587 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0422 11:44:55.353712   46587 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0422 11:44:55.353721   46587 command_runner.go:130] > # enable_criu_support = false
	I0422 11:44:55.353732   46587 command_runner.go:130] > # Enable/disable the generation of the container,
	I0422 11:44:55.353744   46587 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0422 11:44:55.353758   46587 command_runner.go:130] > # enable_pod_events = false
	I0422 11:44:55.353772   46587 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0422 11:44:55.353784   46587 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0422 11:44:55.353794   46587 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0422 11:44:55.353802   46587 command_runner.go:130] > # default_runtime = "runc"
	I0422 11:44:55.353807   46587 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0422 11:44:55.353817   46587 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0422 11:44:55.353827   46587 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0422 11:44:55.353834   46587 command_runner.go:130] > # creation as a file is not desired either.
	I0422 11:44:55.353845   46587 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0422 11:44:55.353852   46587 command_runner.go:130] > # the hostname is being managed dynamically.
	I0422 11:44:55.353857   46587 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0422 11:44:55.353862   46587 command_runner.go:130] > # ]
	I0422 11:44:55.353868   46587 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0422 11:44:55.353876   46587 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0422 11:44:55.353884   46587 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0422 11:44:55.353889   46587 command_runner.go:130] > # Each entry in the table should follow the format:
	I0422 11:44:55.353894   46587 command_runner.go:130] > #
	I0422 11:44:55.353899   46587 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0422 11:44:55.353907   46587 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0422 11:44:55.353929   46587 command_runner.go:130] > # runtime_type = "oci"
	I0422 11:44:55.353936   46587 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0422 11:44:55.353940   46587 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0422 11:44:55.353946   46587 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0422 11:44:55.353951   46587 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0422 11:44:55.353957   46587 command_runner.go:130] > # monitor_env = []
	I0422 11:44:55.353962   46587 command_runner.go:130] > # privileged_without_host_devices = false
	I0422 11:44:55.353968   46587 command_runner.go:130] > # allowed_annotations = []
	I0422 11:44:55.353975   46587 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0422 11:44:55.353981   46587 command_runner.go:130] > # Where:
	I0422 11:44:55.353986   46587 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0422 11:44:55.353994   46587 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0422 11:44:55.354002   46587 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0422 11:44:55.354011   46587 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0422 11:44:55.354016   46587 command_runner.go:130] > #   in $PATH.
	I0422 11:44:55.354022   46587 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0422 11:44:55.354029   46587 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0422 11:44:55.354037   46587 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0422 11:44:55.354043   46587 command_runner.go:130] > #   state.
	I0422 11:44:55.354050   46587 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0422 11:44:55.354057   46587 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0422 11:44:55.354063   46587 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0422 11:44:55.354070   46587 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0422 11:44:55.354076   46587 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0422 11:44:55.354084   46587 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0422 11:44:55.354091   46587 command_runner.go:130] > #   The currently recognized values are:
	I0422 11:44:55.354099   46587 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0422 11:44:55.354107   46587 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0422 11:44:55.354115   46587 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0422 11:44:55.354121   46587 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0422 11:44:55.354131   46587 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0422 11:44:55.354139   46587 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0422 11:44:55.354147   46587 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0422 11:44:55.354157   46587 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0422 11:44:55.354165   46587 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0422 11:44:55.354172   46587 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0422 11:44:55.354177   46587 command_runner.go:130] > #   deprecated option "conmon".
	I0422 11:44:55.354186   46587 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0422 11:44:55.354192   46587 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0422 11:44:55.354201   46587 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0422 11:44:55.354208   46587 command_runner.go:130] > #   should be moved to the container's cgroup
	I0422 11:44:55.354215   46587 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0422 11:44:55.354222   46587 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0422 11:44:55.354229   46587 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0422 11:44:55.354237   46587 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0422 11:44:55.354242   46587 command_runner.go:130] > #
	I0422 11:44:55.354246   46587 command_runner.go:130] > # Using the seccomp notifier feature:
	I0422 11:44:55.354252   46587 command_runner.go:130] > #
	I0422 11:44:55.354257   46587 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0422 11:44:55.354265   46587 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0422 11:44:55.354271   46587 command_runner.go:130] > #
	I0422 11:44:55.354279   46587 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0422 11:44:55.354287   46587 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0422 11:44:55.354293   46587 command_runner.go:130] > #
	I0422 11:44:55.354299   46587 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0422 11:44:55.354304   46587 command_runner.go:130] > # feature.
	I0422 11:44:55.354307   46587 command_runner.go:130] > #
	I0422 11:44:55.354312   46587 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0422 11:44:55.354320   46587 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0422 11:44:55.354326   46587 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0422 11:44:55.354335   46587 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0422 11:44:55.354341   46587 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0422 11:44:55.354346   46587 command_runner.go:130] > #
	I0422 11:44:55.354352   46587 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0422 11:44:55.354360   46587 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0422 11:44:55.354363   46587 command_runner.go:130] > #
	I0422 11:44:55.354370   46587 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0422 11:44:55.354377   46587 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0422 11:44:55.354383   46587 command_runner.go:130] > #
	I0422 11:44:55.354390   46587 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0422 11:44:55.354397   46587 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0422 11:44:55.354401   46587 command_runner.go:130] > # limitation.
	I0422 11:44:55.354406   46587 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0422 11:44:55.354410   46587 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0422 11:44:55.354416   46587 command_runner.go:130] > runtime_type = "oci"
	I0422 11:44:55.354420   46587 command_runner.go:130] > runtime_root = "/run/runc"
	I0422 11:44:55.354429   46587 command_runner.go:130] > runtime_config_path = ""
	I0422 11:44:55.354434   46587 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0422 11:44:55.354441   46587 command_runner.go:130] > monitor_cgroup = "pod"
	I0422 11:44:55.354445   46587 command_runner.go:130] > monitor_exec_cgroup = ""
	I0422 11:44:55.354452   46587 command_runner.go:130] > monitor_env = [
	I0422 11:44:55.354457   46587 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0422 11:44:55.354462   46587 command_runner.go:130] > ]
	I0422 11:44:55.354467   46587 command_runner.go:130] > privileged_without_host_devices = false
	I0422 11:44:55.354476   46587 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0422 11:44:55.354483   46587 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0422 11:44:55.354488   46587 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0422 11:44:55.354498   46587 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0422 11:44:55.354507   46587 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0422 11:44:55.354515   46587 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0422 11:44:55.354528   46587 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0422 11:44:55.354538   46587 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0422 11:44:55.354546   46587 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0422 11:44:55.354554   46587 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0422 11:44:55.354560   46587 command_runner.go:130] > # Example:
	I0422 11:44:55.354564   46587 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0422 11:44:55.354571   46587 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0422 11:44:55.354576   46587 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0422 11:44:55.354581   46587 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0422 11:44:55.354587   46587 command_runner.go:130] > # cpuset = 0
	I0422 11:44:55.354591   46587 command_runner.go:130] > # cpushares = "0-1"
	I0422 11:44:55.354596   46587 command_runner.go:130] > # Where:
	I0422 11:44:55.354600   46587 command_runner.go:130] > # The workload name is workload-type.
	I0422 11:44:55.354609   46587 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0422 11:44:55.354616   46587 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0422 11:44:55.354624   46587 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0422 11:44:55.354631   46587 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0422 11:44:55.354639   46587 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
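To illustrate the annotation mechanism described above, here is a hypothetical pod opting into the commented-out "workload-type" example. This only has an effect if that workload section is actually enabled in crio.conf; the pod name and image are placeholders:

	# the activation annotation is key-only; its value is ignored
	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio.workload: ""
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]
	EOF

Per-container resource overrides would then use the io.crio.workload-type annotation prefix shown in the comments above.
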
	I0422 11:44:55.354646   46587 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0422 11:44:55.354655   46587 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0422 11:44:55.354665   46587 command_runner.go:130] > # Default value is set to true
	I0422 11:44:55.354675   46587 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0422 11:44:55.354686   46587 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0422 11:44:55.354698   46587 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0422 11:44:55.354709   46587 command_runner.go:130] > # Default value is set to 'false'
	I0422 11:44:55.354720   46587 command_runner.go:130] > # disable_hostport_mapping = false
	I0422 11:44:55.354732   46587 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0422 11:44:55.354740   46587 command_runner.go:130] > #
	I0422 11:44:55.354751   46587 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0422 11:44:55.354762   46587 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0422 11:44:55.354774   46587 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0422 11:44:55.354787   46587 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0422 11:44:55.354796   46587 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0422 11:44:55.354800   46587 command_runner.go:130] > [crio.image]
	I0422 11:44:55.354809   46587 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0422 11:44:55.354816   46587 command_runner.go:130] > # default_transport = "docker://"
	I0422 11:44:55.354829   46587 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0422 11:44:55.354837   46587 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0422 11:44:55.354841   46587 command_runner.go:130] > # global_auth_file = ""
	I0422 11:44:55.354846   46587 command_runner.go:130] > # The image used to instantiate infra containers.
	I0422 11:44:55.354850   46587 command_runner.go:130] > # This option supports live configuration reload.
	I0422 11:44:55.354857   46587 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0422 11:44:55.354863   46587 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0422 11:44:55.354868   46587 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0422 11:44:55.354873   46587 command_runner.go:130] > # This option supports live configuration reload.
	I0422 11:44:55.354877   46587 command_runner.go:130] > # pause_image_auth_file = ""
	I0422 11:44:55.354882   46587 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0422 11:44:55.354887   46587 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0422 11:44:55.354893   46587 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0422 11:44:55.354898   46587 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0422 11:44:55.354902   46587 command_runner.go:130] > # pause_command = "/pause"
	I0422 11:44:55.354907   46587 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0422 11:44:55.354912   46587 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0422 11:44:55.354918   46587 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0422 11:44:55.354923   46587 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0422 11:44:55.354929   46587 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0422 11:44:55.354934   46587 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0422 11:44:55.354938   46587 command_runner.go:130] > # pinned_images = [
	I0422 11:44:55.354941   46587 command_runner.go:130] > # ]
	I0422 11:44:55.354946   46587 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0422 11:44:55.354953   46587 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0422 11:44:55.354958   46587 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0422 11:44:55.354964   46587 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0422 11:44:55.354972   46587 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0422 11:44:55.354976   46587 command_runner.go:130] > # signature_policy = ""
	I0422 11:44:55.354984   46587 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0422 11:44:55.354990   46587 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0422 11:44:55.354998   46587 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0422 11:44:55.355005   46587 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0422 11:44:55.355012   46587 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0422 11:44:55.355019   46587 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0422 11:44:55.355025   46587 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0422 11:44:55.355034   46587 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0422 11:44:55.355041   46587 command_runner.go:130] > # changing them here.
	I0422 11:44:55.355045   46587 command_runner.go:130] > # insecure_registries = [
	I0422 11:44:55.355050   46587 command_runner.go:130] > # ]
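As the comment above suggests, registry settings are better handled through containers-registries.conf(5) than through insecure_registries here. A hypothetical drop-in marking a private registry as insecure (the registry host and file name are made up for illustration):

	sudo tee /etc/containers/registries.conf.d/50-insecure-local.conf <<-'EOF'
	[[registry]]
	location = "registry.local:5000"
	insecure = true
	EOF
	# restart CRI-O so the change is picked up
	sudo systemctl restart crio
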
	I0422 11:44:55.355056   46587 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0422 11:44:55.355064   46587 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0422 11:44:55.355070   46587 command_runner.go:130] > # image_volumes = "mkdir"
	I0422 11:44:55.355076   46587 command_runner.go:130] > # Temporary directory to use for storing big files
	I0422 11:44:55.355082   46587 command_runner.go:130] > # big_files_temporary_dir = ""
	I0422 11:44:55.355088   46587 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0422 11:44:55.355093   46587 command_runner.go:130] > # CNI plugins.
	I0422 11:44:55.355098   46587 command_runner.go:130] > [crio.network]
	I0422 11:44:55.355105   46587 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0422 11:44:55.355110   46587 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0422 11:44:55.355117   46587 command_runner.go:130] > # cni_default_network = ""
	I0422 11:44:55.355122   46587 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0422 11:44:55.355128   46587 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0422 11:44:55.355133   46587 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0422 11:44:55.355139   46587 command_runner.go:130] > # plugin_dirs = [
	I0422 11:44:55.355143   46587 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0422 11:44:55.355149   46587 command_runner.go:130] > # ]
	I0422 11:44:55.355155   46587 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0422 11:44:55.355161   46587 command_runner.go:130] > [crio.metrics]
	I0422 11:44:55.355166   46587 command_runner.go:130] > # Globally enable or disable metrics support.
	I0422 11:44:55.355173   46587 command_runner.go:130] > enable_metrics = true
	I0422 11:44:55.355177   46587 command_runner.go:130] > # Specify enabled metrics collectors.
	I0422 11:44:55.355184   46587 command_runner.go:130] > # Per default all metrics are enabled.
	I0422 11:44:55.355189   46587 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0422 11:44:55.355198   46587 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0422 11:44:55.355205   46587 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0422 11:44:55.355210   46587 command_runner.go:130] > # metrics_collectors = [
	I0422 11:44:55.355214   46587 command_runner.go:130] > # 	"operations",
	I0422 11:44:55.355221   46587 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0422 11:44:55.355225   46587 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0422 11:44:55.355231   46587 command_runner.go:130] > # 	"operations_errors",
	I0422 11:44:55.355235   46587 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0422 11:44:55.355242   46587 command_runner.go:130] > # 	"image_pulls_by_name",
	I0422 11:44:55.355247   46587 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0422 11:44:55.355251   46587 command_runner.go:130] > # 	"image_pulls_failures",
	I0422 11:44:55.355255   46587 command_runner.go:130] > # 	"image_pulls_successes",
	I0422 11:44:55.355259   46587 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0422 11:44:55.355263   46587 command_runner.go:130] > # 	"image_layer_reuse",
	I0422 11:44:55.355270   46587 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0422 11:44:55.355276   46587 command_runner.go:130] > # 	"containers_oom_total",
	I0422 11:44:55.355283   46587 command_runner.go:130] > # 	"containers_oom",
	I0422 11:44:55.355287   46587 command_runner.go:130] > # 	"processes_defunct",
	I0422 11:44:55.355293   46587 command_runner.go:130] > # 	"operations_total",
	I0422 11:44:55.355297   46587 command_runner.go:130] > # 	"operations_latency_seconds",
	I0422 11:44:55.355304   46587 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0422 11:44:55.355308   46587 command_runner.go:130] > # 	"operations_errors_total",
	I0422 11:44:55.355315   46587 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0422 11:44:55.355319   46587 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0422 11:44:55.355325   46587 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0422 11:44:55.355329   46587 command_runner.go:130] > # 	"image_pulls_success_total",
	I0422 11:44:55.355333   46587 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0422 11:44:55.355337   46587 command_runner.go:130] > # 	"containers_oom_count_total",
	I0422 11:44:55.355344   46587 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0422 11:44:55.355348   46587 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0422 11:44:55.355354   46587 command_runner.go:130] > # ]
	I0422 11:44:55.355358   46587 command_runner.go:130] > # The port on which the metrics server will listen.
	I0422 11:44:55.355365   46587 command_runner.go:130] > # metrics_port = 9090
	I0422 11:44:55.355370   46587 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0422 11:44:55.355376   46587 command_runner.go:130] > # metrics_socket = ""
	I0422 11:44:55.355380   46587 command_runner.go:130] > # The certificate for the secure metrics server.
	I0422 11:44:55.355388   46587 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0422 11:44:55.355395   46587 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0422 11:44:55.355401   46587 command_runner.go:130] > # certificate on any modification event.
	I0422 11:44:55.355405   46587 command_runner.go:130] > # metrics_cert = ""
	I0422 11:44:55.355410   46587 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0422 11:44:55.355417   46587 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0422 11:44:55.355421   46587 command_runner.go:130] > # metrics_key = ""
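With enable_metrics = true (as set above) and the default metrics_port of 9090, the Prometheus endpoint can be scraped directly on the node. A quick sketch; the exact metric names depend on which collectors are enabled and carry the crio_ / container_runtime_ prefixes noted above:

	curl -s http://127.0.0.1:9090/metrics | grep -m 5 '^crio_'
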
	I0422 11:44:55.355433   46587 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0422 11:44:55.355439   46587 command_runner.go:130] > [crio.tracing]
	I0422 11:44:55.355444   46587 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0422 11:44:55.355451   46587 command_runner.go:130] > # enable_tracing = false
	I0422 11:44:55.355456   46587 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0422 11:44:55.355463   46587 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0422 11:44:55.355469   46587 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0422 11:44:55.355476   46587 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0422 11:44:55.355480   46587 command_runner.go:130] > # CRI-O NRI configuration.
	I0422 11:44:55.355484   46587 command_runner.go:130] > [crio.nri]
	I0422 11:44:55.355488   46587 command_runner.go:130] > # Globally enable or disable NRI.
	I0422 11:44:55.355492   46587 command_runner.go:130] > # enable_nri = false
	I0422 11:44:55.355496   46587 command_runner.go:130] > # NRI socket to listen on.
	I0422 11:44:55.355503   46587 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0422 11:44:55.355507   46587 command_runner.go:130] > # NRI plugin directory to use.
	I0422 11:44:55.355513   46587 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0422 11:44:55.355518   46587 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0422 11:44:55.355527   46587 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0422 11:44:55.355534   46587 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0422 11:44:55.355538   46587 command_runner.go:130] > # nri_disable_connections = false
	I0422 11:44:55.355545   46587 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0422 11:44:55.355549   46587 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0422 11:44:55.355554   46587 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0422 11:44:55.355561   46587 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0422 11:44:55.355567   46587 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0422 11:44:55.355571   46587 command_runner.go:130] > [crio.stats]
	I0422 11:44:55.355579   46587 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0422 11:44:55.355584   46587 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0422 11:44:55.355590   46587 command_runner.go:130] > # stats_collection_period = 0
	I0422 11:44:55.355611   46587 command_runner.go:130] ! time="2024-04-22 11:44:55.323156157Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0422 11:44:55.355624   46587 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
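One way to double-check what CRI-O actually loaded, assuming the crio config subcommand is available on the node, is to dump the merged configuration and pull out a single section:

	sudo crio config 2>/dev/null | grep -A 5 '^\[crio\.metrics\]'
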
	I0422 11:44:55.355749   46587 cni.go:84] Creating CNI manager for ""
	I0422 11:44:55.355765   46587 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0422 11:44:55.355775   46587 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 11:44:55.355794   46587 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-254635 NodeName:multinode-254635 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 11:44:55.355917   46587 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-254635"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 11:44:55.355973   46587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 11:44:55.367279   46587 command_runner.go:130] > kubeadm
	I0422 11:44:55.367298   46587 command_runner.go:130] > kubectl
	I0422 11:44:55.367304   46587 command_runner.go:130] > kubelet
	I0422 11:44:55.367327   46587 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 11:44:55.367381   46587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 11:44:55.377316   46587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0422 11:44:55.397162   46587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 11:44:55.415642   46587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
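With the generated kubeadm config staged at /var/tmp/minikube/kubeadm.yaml.new, a dry run is one way to sanity-check it without mutating the node (a sketch only; minikube itself drives kubeadm separately):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
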
	I0422 11:44:55.435575   46587 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0422 11:44:55.440222   46587 command_runner.go:130] > 192.168.39.185	control-plane.minikube.internal
	I0422 11:44:55.440383   46587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:44:55.588534   46587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 11:44:55.606404   46587 certs.go:68] Setting up /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635 for IP: 192.168.39.185
	I0422 11:44:55.606425   46587 certs.go:194] generating shared ca certs ...
	I0422 11:44:55.606446   46587 certs.go:226] acquiring lock for ca certs: {Name:mk0b77082b88c771d0b00be5267ca31dfee6f85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:44:55.606589   46587 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key
	I0422 11:44:55.606648   46587 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key
	I0422 11:44:55.606661   46587 certs.go:256] generating profile certs ...
	I0422 11:44:55.606748   46587 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/client.key
	I0422 11:44:55.606833   46587 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/apiserver.key.8cc66a77
	I0422 11:44:55.606885   46587 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/proxy-client.key
	I0422 11:44:55.606902   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 11:44:55.606924   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 11:44:55.606943   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 11:44:55.606958   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 11:44:55.606976   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 11:44:55.606993   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 11:44:55.607012   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 11:44:55.607028   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 11:44:55.607098   46587 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem (1338 bytes)
	W0422 11:44:55.607137   46587 certs.go:480] ignoring /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945_empty.pem, impossibly tiny 0 bytes
	I0422 11:44:55.607151   46587 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem (1679 bytes)
	I0422 11:44:55.607185   46587 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem (1078 bytes)
	I0422 11:44:55.607232   46587 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem (1123 bytes)
	I0422 11:44:55.607273   46587 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem (1679 bytes)
	I0422 11:44:55.607328   46587 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:44:55.607366   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:44:55.607386   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem -> /usr/share/ca-certificates/14945.pem
	I0422 11:44:55.607406   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /usr/share/ca-certificates/149452.pem
	I0422 11:44:55.607954   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 11:44:55.633711   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 11:44:55.661079   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 11:44:55.687892   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 11:44:55.715749   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0422 11:44:55.743775   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 11:44:55.771803   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 11:44:55.799297   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 11:44:55.826745   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 11:44:55.854629   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem --> /usr/share/ca-certificates/14945.pem (1338 bytes)
	I0422 11:44:55.882652   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /usr/share/ca-certificates/149452.pem (1708 bytes)
	I0422 11:44:55.910047   46587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 11:44:55.930007   46587 ssh_runner.go:195] Run: openssl version
	I0422 11:44:55.937116   46587 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0422 11:44:55.937198   46587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 11:44:55.949184   46587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:44:55.954172   46587 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:44:55.954373   46587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:44:55.954429   46587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:44:55.960525   46587 command_runner.go:130] > b5213941
	I0422 11:44:55.960608   46587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 11:44:55.970796   46587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14945.pem && ln -fs /usr/share/ca-certificates/14945.pem /etc/ssl/certs/14945.pem"
	I0422 11:44:55.982452   46587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14945.pem
	I0422 11:44:55.987389   46587 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 22 10:51 /usr/share/ca-certificates/14945.pem
	I0422 11:44:55.987482   46587 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 10:51 /usr/share/ca-certificates/14945.pem
	I0422 11:44:55.987533   46587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14945.pem
	I0422 11:44:55.993542   46587 command_runner.go:130] > 51391683
	I0422 11:44:55.993799   46587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14945.pem /etc/ssl/certs/51391683.0"
	I0422 11:44:56.003701   46587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149452.pem && ln -fs /usr/share/ca-certificates/149452.pem /etc/ssl/certs/149452.pem"
	I0422 11:44:56.015412   46587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149452.pem
	I0422 11:44:56.020345   46587 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 22 10:51 /usr/share/ca-certificates/149452.pem
	I0422 11:44:56.020464   46587 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 10:51 /usr/share/ca-certificates/149452.pem
	I0422 11:44:56.020515   46587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149452.pem
	I0422 11:44:56.027065   46587 command_runner.go:130] > 3ec20f2e
	I0422 11:44:56.027217   46587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149452.pem /etc/ssl/certs/3ec20f2e.0"
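The hex names of the /etc/ssl/certs links created above are OpenSSL subject hashes, which is how openssl locates a CA at verification time. A short sketch tying the b5213941 hash printed earlier back to the minikube CA; the verify step assumes apiserver.crt was issued by minikubeCA, as it is in this run:

	# prints the subject hash used as the symlink name, e.g. b5213941
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0
	# verify a cert issued by that CA against the hashed directory
	sudo openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt
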
	I0422 11:44:56.037205   46587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 11:44:56.042671   46587 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 11:44:56.042699   46587 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0422 11:44:56.042723   46587 command_runner.go:130] > Device: 253,1	Inode: 6292502     Links: 1
	I0422 11:44:56.042739   46587 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0422 11:44:56.042749   46587 command_runner.go:130] > Access: 2024-04-22 11:38:03.510650914 +0000
	I0422 11:44:56.042757   46587 command_runner.go:130] > Modify: 2024-04-22 11:38:03.510650914 +0000
	I0422 11:44:56.042769   46587 command_runner.go:130] > Change: 2024-04-22 11:38:03.510650914 +0000
	I0422 11:44:56.042777   46587 command_runner.go:130] >  Birth: 2024-04-22 11:38:03.510650914 +0000
	I0422 11:44:56.042845   46587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 11:44:56.049708   46587 command_runner.go:130] > Certificate will not expire
	I0422 11:44:56.049918   46587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 11:44:56.056172   46587 command_runner.go:130] > Certificate will not expire
	I0422 11:44:56.056237   46587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 11:44:56.062119   46587 command_runner.go:130] > Certificate will not expire
	I0422 11:44:56.062320   46587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 11:44:56.068483   46587 command_runner.go:130] > Certificate will not expire
	I0422 11:44:56.068536   46587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 11:44:56.074648   46587 command_runner.go:130] > Certificate will not expire
	I0422 11:44:56.074762   46587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 11:44:56.080612   46587 command_runner.go:130] > Certificate will not expire
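The -checkend 86400 runs above ask whether each certificate expires within the next 24 hours. The same check over all profile certs in one loop, using the paths seen in this run:

	for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	  printf '%s: ' "$c"
	  sudo openssl x509 -noout -checkend 86400 -in "$c"
	done
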
	I0422 11:44:56.080984   46587 kubeadm.go:391] StartCluster: {Name:multinode-254635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-254635
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:f
alse istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:44:56.081129   46587 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 11:44:56.081186   46587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 11:44:56.120475   46587 command_runner.go:130] > 11c87675d112df1f4e12a819757b733862cca0a7eccb55f1d72d483e254ce402
	I0422 11:44:56.120503   46587 command_runner.go:130] > c5ac398b3838a4544e429afdc4ec699c532240a711df54d0ff54f626894fd3c3
	I0422 11:44:56.120512   46587 command_runner.go:130] > 70ea62bce313978f143670020dbcaed41edb5279e812840d18fa210fbf68433d
	I0422 11:44:56.120521   46587 command_runner.go:130] > 8bd2ac5a2adfeb536099f59cf363bbbde81f2e3983e1d6c18a1f6565651e8ed9
	I0422 11:44:56.120529   46587 command_runner.go:130] > 07a0b4812dd3b1c6a7f0c82617d26c0ccc45b8b8e1d30d6c318f8bda12735f0b
	I0422 11:44:56.120538   46587 command_runner.go:130] > d3b8493457784233fb659b95632fa92367fa72fc86b760ee436e0ad6468bd664
	I0422 11:44:56.120550   46587 command_runner.go:130] > d66e29130d9c973c5174eff5a88cb844d52b9fa38ad6333085bb66c3bd155697
	I0422 11:44:56.120565   46587 command_runner.go:130] > 7c0d3bf49be403d6755298c16ec74a2883dc6b3e3c8efde6968515c7bc280b9c
	I0422 11:44:56.120589   46587 cri.go:89] found id: "11c87675d112df1f4e12a819757b733862cca0a7eccb55f1d72d483e254ce402"
	I0422 11:44:56.120599   46587 cri.go:89] found id: "c5ac398b3838a4544e429afdc4ec699c532240a711df54d0ff54f626894fd3c3"
	I0422 11:44:56.120602   46587 cri.go:89] found id: "70ea62bce313978f143670020dbcaed41edb5279e812840d18fa210fbf68433d"
	I0422 11:44:56.120605   46587 cri.go:89] found id: "8bd2ac5a2adfeb536099f59cf363bbbde81f2e3983e1d6c18a1f6565651e8ed9"
	I0422 11:44:56.120613   46587 cri.go:89] found id: "07a0b4812dd3b1c6a7f0c82617d26c0ccc45b8b8e1d30d6c318f8bda12735f0b"
	I0422 11:44:56.120616   46587 cri.go:89] found id: "d3b8493457784233fb659b95632fa92367fa72fc86b760ee436e0ad6468bd664"
	I0422 11:44:56.120619   46587 cri.go:89] found id: "d66e29130d9c973c5174eff5a88cb844d52b9fa38ad6333085bb66c3bd155697"
	I0422 11:44:56.120621   46587 cri.go:89] found id: "7c0d3bf49be403d6755298c16ec74a2883dc6b3e3c8efde6968515c7bc280b9c"
	I0422 11:44:56.120624   46587 cri.go:89] found id: ""
	I0422 11:44:56.120662   46587 ssh_runner.go:195] Run: sudo runc list -f json
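The IDs above come from crictl; a couple of follow-up commands for digging into one of them (the container ID is copied from the listing above, not invented):

	# same listing minikube just ran
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# dump the runtime's view of one container
	sudo crictl inspect 11c87675d112df1f4e12a819757b733862cca0a7eccb55f1d72d483e254ce402 | head -n 20
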
	
	
	==> CRI-O <==
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.109763404Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713786388109678282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d173c4f9-0966-445c-b7c7-0e0e15e9be0a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.110265360Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d0406b2-987b-45d4-8c1d-7abc191de568 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.110977398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d0406b2-987b-45d4-8c1d-7abc191de568 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.111398966Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4dd7f77a62427cc79a55da96472d737fa62edd2e32f2a59f9eeb95d7e3cee8b4,PodSandboxId:a4f9db39efb9d57360a91d996f7d4fca5f95b3b4b4a62ee2d15ff003863d5196,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713786336130816510,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w6wst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec3be7d9-b316-43ba-8c05-c028f530c07e,},Annotations:map[string]string{io.kubernetes.container.hash: d354c3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:170cc5dfa96c9013a5da4901c9998afe1e59779fda2e4d36d4697b12c1e7dc34,PodSandboxId:98087e18a7cbb0bd82bb75f40cd5ba1782fa267c12b76ca45bf0562e5a25ef38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713786302535530877,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jzhvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 848b349d-906a-411c-a60b-b559d47ad2a7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4ed2d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aee9691b01261c2c6b2edb4a38d63b27aa00f3c67567e1829e719016c997dc4,PodSandboxId:f9a67731119452cc2f7e5efd38ec79284dc18238337d6f5aabf1304b57ab67b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713786302438370128,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-858b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 457a81ab-ca6c-4757-92b1-734ba151216f,},Annotations:map[string]string{io.kubernetes.container.hash: 648abffd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0479f88c8f22fadfd7ca5c88a541baead1e410717b9d83c5f6e8c9c81026cd90,PodSandboxId:a042cf023bf4e218fa5f8b26e1a1b677b8163df69466c1d490db5763d8a85265,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713786302404859701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mr7rq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a91e327-1478-4e50-9993-de3d5406efaa,},Annotations:map[string]s
tring{io.kubernetes.container.hash: b4fca94f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9048b9918b26e868f7f5bb8d1b1b1f3370ac6cdc2da10608141d9b14c76858e3,PodSandboxId:1ab69924cd55ac55cfdb620fa2405404172a58fbf0cf79574173e4aa7793996e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713786302415868649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82216f3b-f366-4b55-893a-8f7c1b59372b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 42acbef6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab8f0adadfda9e02f9436c2cee58b7ec4d68640fc4514df80155477e55ffd7c,PodSandboxId:b376e76e22dd60e25bc96a0bc0f85f1612608a0606260ba99efdc85ce7e34bb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713786298615575062,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346d24e136b744e11f51aaf0b32cfabc,},Annotations:map[string]string{io.kubernetes.container.hash: ea3273f1,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4867fbb06694830cf22ead69bf0ddd10a883b530a624a5e9b3b78fa115b0bc2,PodSandboxId:c8783ce123c8f90199c1c8c7247f52091596152d1373914113027e71aa5ef328,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713786298590465244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7dbbfc94d550094389016edf0d994af,},Annotations:map[string]string{io.kubernetes.container.hash: 933c335
1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9631acc3cd8008ea734f6268932041e8e0e08d96b2532faa4be1d1e017eae954,PodSandboxId:fd54210addb9cd6bce92a9348095b27c8b805d8be5d54b8c974fc492fc55dc7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713786298489648163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739aac7b3eff66515aa3886c2a1e8a1f,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86609797440bf5cb0ebb23673aadaa0c528eea783ee8792fab8e9c928d17a31c,PodSandboxId:af59cf4622968ecd5d9b3cc728998c2bb506c8c387315d19b068a53faf38db31,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713786298509072759,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b4e82c7f0c63c79504c005bee34fab,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d119fbdf20b5c9a472fff8e3b1e684445daab02f1bbdcea33624195a806c4ad,PodSandboxId:a4e2b504f1ee11299039caae119b3822cb2845449805ab51d09c66c02f259520,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713785987940597002,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w6wst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec3be7d9-b316-43ba-8c05-c028f530c07e,},Annotations:map[string]string{io.kubernetes.container.hash: d354c3c,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c87675d112df1f4e12a819757b733862cca0a7eccb55f1d72d483e254ce402,PodSandboxId:68f35a30ef2888e4d8c3443bed165372125c99ec4c90ad00e5788f23829e37a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713785939563513704,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82216f3b-f366-4b55-893a-8f7c1b59372b,},Annotations:map[string]string{io.kubernetes.container.hash: 42acbef6,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5ac398b3838a4544e429afdc4ec699c532240a711df54d0ff54f626894fd3c3,PodSandboxId:424d2c496a3dea7ca547f4fac7ee1fedc8712d6349f046b99e0e60337a89ae4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713785939558527459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-858b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 457a81ab-ca6c-4757-92b1-734ba151216f,},Annotations:map[string]string{io.kubernetes.container.hash: 648abffd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ea62bce313978f143670020dbcaed41edb5279e812840d18fa210fbf68433d,PodSandboxId:f8b84bce701b57e793df270099e01b32c56bdae6c38be83e6b2821890bd56005,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713785908412292124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mr7rq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3a91e327-1478-4e50-9993-de3d5406efaa,},Annotations:map[string]string{io.kubernetes.container.hash: b4fca94f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd2ac5a2adfeb536099f59cf363bbbde81f2e3983e1d6c18a1f6565651e8ed9,PodSandboxId:5302e172d2439c6f7ab662cf2a92c5a8e3b4fdec4ca08af14bb0900e2b6db4db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713785907875455356,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jzhvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 848b349d-906a-411c-a60b-b
559d47ad2a7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4ed2d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a0b4812dd3b1c6a7f0c82617d26c0ccc45b8b8e1d30d6c318f8bda12735f0b,PodSandboxId:8eee2070b37b5e718fe58a882a7c2dd6170f42ad7a942a9c831405dea835c4b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713785887893260405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b4e82c7f0c63c79504c005bee34fab,},A
nnotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b8493457784233fb659b95632fa92367fa72fc86b760ee436e0ad6468bd664,PodSandboxId:dde06e3757bdcbf4a3df03f5af8d551716129cab83db3c61cd492162196df95f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713785887846208182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346d24e136b744e11f51aaf0b32cfabc,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: ea3273f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66e29130d9c973c5174eff5a88cb844d52b9fa38ad6333085bb66c3bd155697,PodSandboxId:174bda2a37e43f7a336de9740c6f681c2ffbc130efbd2576cd3af6aa0b68fdb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713785887816866063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7dbbfc94d550094389016edf0d994af,},Annotations:map[string]string{io.k
ubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0d3bf49be403d6755298c16ec74a2883dc6b3e3c8efde6968515c7bc280b9c,PodSandboxId:1cd9c0443939a46f232ffda1c018b1c89230a06ff670eadbe540f5e61af211f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713785887803677671,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739aac7b3eff66515aa3886c2a1e8a1f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 55596331,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d0406b2-987b-45d4-8c1d-7abc191de568 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.156520281Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=05ece9e5-d88b-49e5-9181-d9d08760002a name=/runtime.v1.RuntimeService/Version
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.156598971Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=05ece9e5-d88b-49e5-9181-d9d08760002a name=/runtime.v1.RuntimeService/Version
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.157992829Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1351a74-db5e-40e7-ac31-5ce4a8d4a405 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.158778721Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713786388158672424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1351a74-db5e-40e7-ac31-5ce4a8d4a405 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.159626438Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=202ddce4-037e-4e13-8d41-42bc1c1e5363 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.159943694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=202ddce4-037e-4e13-8d41-42bc1c1e5363 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.160364914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4dd7f77a62427cc79a55da96472d737fa62edd2e32f2a59f9eeb95d7e3cee8b4,PodSandboxId:a4f9db39efb9d57360a91d996f7d4fca5f95b3b4b4a62ee2d15ff003863d5196,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713786336130816510,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w6wst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec3be7d9-b316-43ba-8c05-c028f530c07e,},Annotations:map[string]string{io.kubernetes.container.hash: d354c3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:170cc5dfa96c9013a5da4901c9998afe1e59779fda2e4d36d4697b12c1e7dc34,PodSandboxId:98087e18a7cbb0bd82bb75f40cd5ba1782fa267c12b76ca45bf0562e5a25ef38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713786302535530877,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jzhvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 848b349d-906a-411c-a60b-b559d47ad2a7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4ed2d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aee9691b01261c2c6b2edb4a38d63b27aa00f3c67567e1829e719016c997dc4,PodSandboxId:f9a67731119452cc2f7e5efd38ec79284dc18238337d6f5aabf1304b57ab67b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713786302438370128,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-858b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 457a81ab-ca6c-4757-92b1-734ba151216f,},Annotations:map[string]string{io.kubernetes.container.hash: 648abffd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0479f88c8f22fadfd7ca5c88a541baead1e410717b9d83c5f6e8c9c81026cd90,PodSandboxId:a042cf023bf4e218fa5f8b26e1a1b677b8163df69466c1d490db5763d8a85265,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713786302404859701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mr7rq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a91e327-1478-4e50-9993-de3d5406efaa,},Annotations:map[string]s
tring{io.kubernetes.container.hash: b4fca94f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9048b9918b26e868f7f5bb8d1b1b1f3370ac6cdc2da10608141d9b14c76858e3,PodSandboxId:1ab69924cd55ac55cfdb620fa2405404172a58fbf0cf79574173e4aa7793996e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713786302415868649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82216f3b-f366-4b55-893a-8f7c1b59372b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 42acbef6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab8f0adadfda9e02f9436c2cee58b7ec4d68640fc4514df80155477e55ffd7c,PodSandboxId:b376e76e22dd60e25bc96a0bc0f85f1612608a0606260ba99efdc85ce7e34bb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713786298615575062,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346d24e136b744e11f51aaf0b32cfabc,},Annotations:map[string]string{io.kubernetes.container.hash: ea3273f1,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4867fbb06694830cf22ead69bf0ddd10a883b530a624a5e9b3b78fa115b0bc2,PodSandboxId:c8783ce123c8f90199c1c8c7247f52091596152d1373914113027e71aa5ef328,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713786298590465244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7dbbfc94d550094389016edf0d994af,},Annotations:map[string]string{io.kubernetes.container.hash: 933c335
1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9631acc3cd8008ea734f6268932041e8e0e08d96b2532faa4be1d1e017eae954,PodSandboxId:fd54210addb9cd6bce92a9348095b27c8b805d8be5d54b8c974fc492fc55dc7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713786298489648163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739aac7b3eff66515aa3886c2a1e8a1f,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86609797440bf5cb0ebb23673aadaa0c528eea783ee8792fab8e9c928d17a31c,PodSandboxId:af59cf4622968ecd5d9b3cc728998c2bb506c8c387315d19b068a53faf38db31,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713786298509072759,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b4e82c7f0c63c79504c005bee34fab,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d119fbdf20b5c9a472fff8e3b1e684445daab02f1bbdcea33624195a806c4ad,PodSandboxId:a4e2b504f1ee11299039caae119b3822cb2845449805ab51d09c66c02f259520,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713785987940597002,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w6wst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec3be7d9-b316-43ba-8c05-c028f530c07e,},Annotations:map[string]string{io.kubernetes.container.hash: d354c3c,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c87675d112df1f4e12a819757b733862cca0a7eccb55f1d72d483e254ce402,PodSandboxId:68f35a30ef2888e4d8c3443bed165372125c99ec4c90ad00e5788f23829e37a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713785939563513704,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82216f3b-f366-4b55-893a-8f7c1b59372b,},Annotations:map[string]string{io.kubernetes.container.hash: 42acbef6,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5ac398b3838a4544e429afdc4ec699c532240a711df54d0ff54f626894fd3c3,PodSandboxId:424d2c496a3dea7ca547f4fac7ee1fedc8712d6349f046b99e0e60337a89ae4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713785939558527459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-858b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 457a81ab-ca6c-4757-92b1-734ba151216f,},Annotations:map[string]string{io.kubernetes.container.hash: 648abffd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ea62bce313978f143670020dbcaed41edb5279e812840d18fa210fbf68433d,PodSandboxId:f8b84bce701b57e793df270099e01b32c56bdae6c38be83e6b2821890bd56005,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713785908412292124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mr7rq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3a91e327-1478-4e50-9993-de3d5406efaa,},Annotations:map[string]string{io.kubernetes.container.hash: b4fca94f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd2ac5a2adfeb536099f59cf363bbbde81f2e3983e1d6c18a1f6565651e8ed9,PodSandboxId:5302e172d2439c6f7ab662cf2a92c5a8e3b4fdec4ca08af14bb0900e2b6db4db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713785907875455356,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jzhvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 848b349d-906a-411c-a60b-b
559d47ad2a7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4ed2d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a0b4812dd3b1c6a7f0c82617d26c0ccc45b8b8e1d30d6c318f8bda12735f0b,PodSandboxId:8eee2070b37b5e718fe58a882a7c2dd6170f42ad7a942a9c831405dea835c4b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713785887893260405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b4e82c7f0c63c79504c005bee34fab,},A
nnotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b8493457784233fb659b95632fa92367fa72fc86b760ee436e0ad6468bd664,PodSandboxId:dde06e3757bdcbf4a3df03f5af8d551716129cab83db3c61cd492162196df95f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713785887846208182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346d24e136b744e11f51aaf0b32cfabc,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: ea3273f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66e29130d9c973c5174eff5a88cb844d52b9fa38ad6333085bb66c3bd155697,PodSandboxId:174bda2a37e43f7a336de9740c6f681c2ffbc130efbd2576cd3af6aa0b68fdb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713785887816866063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7dbbfc94d550094389016edf0d994af,},Annotations:map[string]string{io.k
ubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0d3bf49be403d6755298c16ec74a2883dc6b3e3c8efde6968515c7bc280b9c,PodSandboxId:1cd9c0443939a46f232ffda1c018b1c89230a06ff670eadbe540f5e61af211f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713785887803677671,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739aac7b3eff66515aa3886c2a1e8a1f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 55596331,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=202ddce4-037e-4e13-8d41-42bc1c1e5363 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.203370011Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=139673af-7831-4b2c-8c24-2abbabd299e4 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.203473367Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=139673af-7831-4b2c-8c24-2abbabd299e4 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.204671444Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36e203e0-62a6-4d6d-8e86-e7d39e06b4e3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.205331927Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713786388205305617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36e203e0-62a6-4d6d-8e86-e7d39e06b4e3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.206381822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c1fe61e-ebaa-4f79-b561-0e1c02216fa1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.206437228Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c1fe61e-ebaa-4f79-b561-0e1c02216fa1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.206878721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4dd7f77a62427cc79a55da96472d737fa62edd2e32f2a59f9eeb95d7e3cee8b4,PodSandboxId:a4f9db39efb9d57360a91d996f7d4fca5f95b3b4b4a62ee2d15ff003863d5196,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713786336130816510,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w6wst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec3be7d9-b316-43ba-8c05-c028f530c07e,},Annotations:map[string]string{io.kubernetes.container.hash: d354c3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:170cc5dfa96c9013a5da4901c9998afe1e59779fda2e4d36d4697b12c1e7dc34,PodSandboxId:98087e18a7cbb0bd82bb75f40cd5ba1782fa267c12b76ca45bf0562e5a25ef38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713786302535530877,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jzhvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 848b349d-906a-411c-a60b-b559d47ad2a7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4ed2d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aee9691b01261c2c6b2edb4a38d63b27aa00f3c67567e1829e719016c997dc4,PodSandboxId:f9a67731119452cc2f7e5efd38ec79284dc18238337d6f5aabf1304b57ab67b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713786302438370128,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-858b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 457a81ab-ca6c-4757-92b1-734ba151216f,},Annotations:map[string]string{io.kubernetes.container.hash: 648abffd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0479f88c8f22fadfd7ca5c88a541baead1e410717b9d83c5f6e8c9c81026cd90,PodSandboxId:a042cf023bf4e218fa5f8b26e1a1b677b8163df69466c1d490db5763d8a85265,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713786302404859701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mr7rq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a91e327-1478-4e50-9993-de3d5406efaa,},Annotations:map[string]s
tring{io.kubernetes.container.hash: b4fca94f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9048b9918b26e868f7f5bb8d1b1b1f3370ac6cdc2da10608141d9b14c76858e3,PodSandboxId:1ab69924cd55ac55cfdb620fa2405404172a58fbf0cf79574173e4aa7793996e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713786302415868649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82216f3b-f366-4b55-893a-8f7c1b59372b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 42acbef6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab8f0adadfda9e02f9436c2cee58b7ec4d68640fc4514df80155477e55ffd7c,PodSandboxId:b376e76e22dd60e25bc96a0bc0f85f1612608a0606260ba99efdc85ce7e34bb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713786298615575062,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346d24e136b744e11f51aaf0b32cfabc,},Annotations:map[string]string{io.kubernetes.container.hash: ea3273f1,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4867fbb06694830cf22ead69bf0ddd10a883b530a624a5e9b3b78fa115b0bc2,PodSandboxId:c8783ce123c8f90199c1c8c7247f52091596152d1373914113027e71aa5ef328,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713786298590465244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7dbbfc94d550094389016edf0d994af,},Annotations:map[string]string{io.kubernetes.container.hash: 933c335
1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9631acc3cd8008ea734f6268932041e8e0e08d96b2532faa4be1d1e017eae954,PodSandboxId:fd54210addb9cd6bce92a9348095b27c8b805d8be5d54b8c974fc492fc55dc7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713786298489648163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739aac7b3eff66515aa3886c2a1e8a1f,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86609797440bf5cb0ebb23673aadaa0c528eea783ee8792fab8e9c928d17a31c,PodSandboxId:af59cf4622968ecd5d9b3cc728998c2bb506c8c387315d19b068a53faf38db31,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713786298509072759,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b4e82c7f0c63c79504c005bee34fab,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d119fbdf20b5c9a472fff8e3b1e684445daab02f1bbdcea33624195a806c4ad,PodSandboxId:a4e2b504f1ee11299039caae119b3822cb2845449805ab51d09c66c02f259520,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713785987940597002,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w6wst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec3be7d9-b316-43ba-8c05-c028f530c07e,},Annotations:map[string]string{io.kubernetes.container.hash: d354c3c,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c87675d112df1f4e12a819757b733862cca0a7eccb55f1d72d483e254ce402,PodSandboxId:68f35a30ef2888e4d8c3443bed165372125c99ec4c90ad00e5788f23829e37a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713785939563513704,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82216f3b-f366-4b55-893a-8f7c1b59372b,},Annotations:map[string]string{io.kubernetes.container.hash: 42acbef6,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5ac398b3838a4544e429afdc4ec699c532240a711df54d0ff54f626894fd3c3,PodSandboxId:424d2c496a3dea7ca547f4fac7ee1fedc8712d6349f046b99e0e60337a89ae4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713785939558527459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-858b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 457a81ab-ca6c-4757-92b1-734ba151216f,},Annotations:map[string]string{io.kubernetes.container.hash: 648abffd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ea62bce313978f143670020dbcaed41edb5279e812840d18fa210fbf68433d,PodSandboxId:f8b84bce701b57e793df270099e01b32c56bdae6c38be83e6b2821890bd56005,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713785908412292124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mr7rq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3a91e327-1478-4e50-9993-de3d5406efaa,},Annotations:map[string]string{io.kubernetes.container.hash: b4fca94f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd2ac5a2adfeb536099f59cf363bbbde81f2e3983e1d6c18a1f6565651e8ed9,PodSandboxId:5302e172d2439c6f7ab662cf2a92c5a8e3b4fdec4ca08af14bb0900e2b6db4db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713785907875455356,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jzhvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 848b349d-906a-411c-a60b-b
559d47ad2a7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4ed2d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a0b4812dd3b1c6a7f0c82617d26c0ccc45b8b8e1d30d6c318f8bda12735f0b,PodSandboxId:8eee2070b37b5e718fe58a882a7c2dd6170f42ad7a942a9c831405dea835c4b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713785887893260405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b4e82c7f0c63c79504c005bee34fab,},A
nnotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b8493457784233fb659b95632fa92367fa72fc86b760ee436e0ad6468bd664,PodSandboxId:dde06e3757bdcbf4a3df03f5af8d551716129cab83db3c61cd492162196df95f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713785887846208182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346d24e136b744e11f51aaf0b32cfabc,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: ea3273f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66e29130d9c973c5174eff5a88cb844d52b9fa38ad6333085bb66c3bd155697,PodSandboxId:174bda2a37e43f7a336de9740c6f681c2ffbc130efbd2576cd3af6aa0b68fdb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713785887816866063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7dbbfc94d550094389016edf0d994af,},Annotations:map[string]string{io.k
ubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0d3bf49be403d6755298c16ec74a2883dc6b3e3c8efde6968515c7bc280b9c,PodSandboxId:1cd9c0443939a46f232ffda1c018b1c89230a06ff670eadbe540f5e61af211f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713785887803677671,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739aac7b3eff66515aa3886c2a1e8a1f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 55596331,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c1fe61e-ebaa-4f79-b561-0e1c02216fa1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.255939860Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cee52418-e763-4bc7-a5a4-3fceeb67c735 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.256034804Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cee52418-e763-4bc7-a5a4-3fceeb67c735 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.257675708Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a216559-9ef5-43df-a3e9-27ff48fdccd7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.258757750Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713786388258649204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a216559-9ef5-43df-a3e9-27ff48fdccd7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.259385084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1ee9249-8f10-4f2e-8197-aaa27a267fbe name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.259444984Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1ee9249-8f10-4f2e-8197-aaa27a267fbe name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:46:28 multinode-254635 crio[2862]: time="2024-04-22 11:46:28.260819608Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4dd7f77a62427cc79a55da96472d737fa62edd2e32f2a59f9eeb95d7e3cee8b4,PodSandboxId:a4f9db39efb9d57360a91d996f7d4fca5f95b3b4b4a62ee2d15ff003863d5196,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713786336130816510,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w6wst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec3be7d9-b316-43ba-8c05-c028f530c07e,},Annotations:map[string]string{io.kubernetes.container.hash: d354c3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:170cc5dfa96c9013a5da4901c9998afe1e59779fda2e4d36d4697b12c1e7dc34,PodSandboxId:98087e18a7cbb0bd82bb75f40cd5ba1782fa267c12b76ca45bf0562e5a25ef38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713786302535530877,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jzhvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 848b349d-906a-411c-a60b-b559d47ad2a7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4ed2d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aee9691b01261c2c6b2edb4a38d63b27aa00f3c67567e1829e719016c997dc4,PodSandboxId:f9a67731119452cc2f7e5efd38ec79284dc18238337d6f5aabf1304b57ab67b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713786302438370128,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-858b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 457a81ab-ca6c-4757-92b1-734ba151216f,},Annotations:map[string]string{io.kubernetes.container.hash: 648abffd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0479f88c8f22fadfd7ca5c88a541baead1e410717b9d83c5f6e8c9c81026cd90,PodSandboxId:a042cf023bf4e218fa5f8b26e1a1b677b8163df69466c1d490db5763d8a85265,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713786302404859701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mr7rq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a91e327-1478-4e50-9993-de3d5406efaa,},Annotations:map[string]s
tring{io.kubernetes.container.hash: b4fca94f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9048b9918b26e868f7f5bb8d1b1b1f3370ac6cdc2da10608141d9b14c76858e3,PodSandboxId:1ab69924cd55ac55cfdb620fa2405404172a58fbf0cf79574173e4aa7793996e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713786302415868649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82216f3b-f366-4b55-893a-8f7c1b59372b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 42acbef6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab8f0adadfda9e02f9436c2cee58b7ec4d68640fc4514df80155477e55ffd7c,PodSandboxId:b376e76e22dd60e25bc96a0bc0f85f1612608a0606260ba99efdc85ce7e34bb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713786298615575062,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346d24e136b744e11f51aaf0b32cfabc,},Annotations:map[string]string{io.kubernetes.container.hash: ea3273f1,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4867fbb06694830cf22ead69bf0ddd10a883b530a624a5e9b3b78fa115b0bc2,PodSandboxId:c8783ce123c8f90199c1c8c7247f52091596152d1373914113027e71aa5ef328,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713786298590465244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7dbbfc94d550094389016edf0d994af,},Annotations:map[string]string{io.kubernetes.container.hash: 933c335
1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9631acc3cd8008ea734f6268932041e8e0e08d96b2532faa4be1d1e017eae954,PodSandboxId:fd54210addb9cd6bce92a9348095b27c8b805d8be5d54b8c974fc492fc55dc7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713786298489648163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739aac7b3eff66515aa3886c2a1e8a1f,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86609797440bf5cb0ebb23673aadaa0c528eea783ee8792fab8e9c928d17a31c,PodSandboxId:af59cf4622968ecd5d9b3cc728998c2bb506c8c387315d19b068a53faf38db31,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713786298509072759,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b4e82c7f0c63c79504c005bee34fab,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d119fbdf20b5c9a472fff8e3b1e684445daab02f1bbdcea33624195a806c4ad,PodSandboxId:a4e2b504f1ee11299039caae119b3822cb2845449805ab51d09c66c02f259520,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713785987940597002,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w6wst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec3be7d9-b316-43ba-8c05-c028f530c07e,},Annotations:map[string]string{io.kubernetes.container.hash: d354c3c,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c87675d112df1f4e12a819757b733862cca0a7eccb55f1d72d483e254ce402,PodSandboxId:68f35a30ef2888e4d8c3443bed165372125c99ec4c90ad00e5788f23829e37a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713785939563513704,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82216f3b-f366-4b55-893a-8f7c1b59372b,},Annotations:map[string]string{io.kubernetes.container.hash: 42acbef6,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5ac398b3838a4544e429afdc4ec699c532240a711df54d0ff54f626894fd3c3,PodSandboxId:424d2c496a3dea7ca547f4fac7ee1fedc8712d6349f046b99e0e60337a89ae4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713785939558527459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-858b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 457a81ab-ca6c-4757-92b1-734ba151216f,},Annotations:map[string]string{io.kubernetes.container.hash: 648abffd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ea62bce313978f143670020dbcaed41edb5279e812840d18fa210fbf68433d,PodSandboxId:f8b84bce701b57e793df270099e01b32c56bdae6c38be83e6b2821890bd56005,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713785908412292124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mr7rq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3a91e327-1478-4e50-9993-de3d5406efaa,},Annotations:map[string]string{io.kubernetes.container.hash: b4fca94f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd2ac5a2adfeb536099f59cf363bbbde81f2e3983e1d6c18a1f6565651e8ed9,PodSandboxId:5302e172d2439c6f7ab662cf2a92c5a8e3b4fdec4ca08af14bb0900e2b6db4db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713785907875455356,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jzhvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 848b349d-906a-411c-a60b-b
559d47ad2a7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4ed2d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a0b4812dd3b1c6a7f0c82617d26c0ccc45b8b8e1d30d6c318f8bda12735f0b,PodSandboxId:8eee2070b37b5e718fe58a882a7c2dd6170f42ad7a942a9c831405dea835c4b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713785887893260405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b4e82c7f0c63c79504c005bee34fab,},A
nnotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b8493457784233fb659b95632fa92367fa72fc86b760ee436e0ad6468bd664,PodSandboxId:dde06e3757bdcbf4a3df03f5af8d551716129cab83db3c61cd492162196df95f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713785887846208182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346d24e136b744e11f51aaf0b32cfabc,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: ea3273f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66e29130d9c973c5174eff5a88cb844d52b9fa38ad6333085bb66c3bd155697,PodSandboxId:174bda2a37e43f7a336de9740c6f681c2ffbc130efbd2576cd3af6aa0b68fdb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713785887816866063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7dbbfc94d550094389016edf0d994af,},Annotations:map[string]string{io.k
ubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0d3bf49be403d6755298c16ec74a2883dc6b3e3c8efde6968515c7bc280b9c,PodSandboxId:1cd9c0443939a46f232ffda1c018b1c89230a06ff670eadbe540f5e61af211f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713785887803677671,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739aac7b3eff66515aa3886c2a1e8a1f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 55596331,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1ee9249-8f10-4f2e-8197-aaa27a267fbe name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4dd7f77a62427       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      52 seconds ago       Running             busybox                   1                   a4f9db39efb9d       busybox-fc5497c4f-w6wst
	170cc5dfa96c9       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   98087e18a7cbb       kindnet-jzhvl
	9aee9691b0126       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   f9a6773111945       coredns-7db6d8ff4d-858b8
	9048b9918b26e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   1ab69924cd55a       storage-provisioner
	0479f88c8f22f       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      About a minute ago   Running             kube-proxy                1                   a042cf023bf4e       kube-proxy-mr7rq
	cab8f0adadfda       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   b376e76e22dd6       etcd-multinode-254635
	a4867fbb06694       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Running             kube-controller-manager   1                   c8783ce123c8f       kube-controller-manager-multinode-254635
	86609797440bf       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      About a minute ago   Running             kube-scheduler            1                   af59cf4622968       kube-scheduler-multinode-254635
	9631acc3cd800       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            1                   fd54210addb9c       kube-apiserver-multinode-254635
	8d119fbdf20b5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   a4e2b504f1ee1       busybox-fc5497c4f-w6wst
	11c87675d112d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   68f35a30ef288       storage-provisioner
	c5ac398b3838a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   424d2c496a3de       coredns-7db6d8ff4d-858b8
	70ea62bce3139       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago        Exited              kube-proxy                0                   f8b84bce701b5       kube-proxy-mr7rq
	8bd2ac5a2adfe       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      8 minutes ago        Exited              kindnet-cni               0                   5302e172d2439       kindnet-jzhvl
	07a0b4812dd3b       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      8 minutes ago        Exited              kube-scheduler            0                   8eee2070b37b5       kube-scheduler-multinode-254635
	d3b8493457784       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   dde06e3757bdc       etcd-multinode-254635
	d66e29130d9c9       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      8 minutes ago        Exited              kube-controller-manager   0                   174bda2a37e43       kube-controller-manager-multinode-254635
	7c0d3bf49be40       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      8 minutes ago        Exited              kube-apiserver            0                   1cd9c0443939a       kube-apiserver-multinode-254635
	
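Note: the table above is crictl-style container status output, showing both the running containers (attempt 1, after the node restart) and the Exited containers from the original run (attempt 0). Assuming the multinode-254635 profile is still up, a similar listing can usually be reproduced directly on the node with:

  $ out/minikube-linux-amd64 -p multinode-254635 ssh "sudo crictl ps -a"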
	
	==> coredns [9aee9691b01261c2c6b2edb4a38d63b27aa00f3c67567e1829e719016c997dc4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56072 - 32659 "HINFO IN 1266163236657129463.659094727874770730. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.029456983s
	
	
	==> coredns [c5ac398b3838a4544e429afdc4ec699c532240a711df54d0ff54f626894fd3c3] <==
	[INFO] 10.244.0.3:53477 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001911859s
	[INFO] 10.244.0.3:43110 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010615s
	[INFO] 10.244.0.3:57365 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000050085s
	[INFO] 10.244.0.3:36553 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001215941s
	[INFO] 10.244.0.3:48767 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000042748s
	[INFO] 10.244.0.3:50007 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000028162s
	[INFO] 10.244.0.3:41159 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031231s
	[INFO] 10.244.1.2:47904 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184337s
	[INFO] 10.244.1.2:44915 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012092s
	[INFO] 10.244.1.2:41093 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105438s
	[INFO] 10.244.1.2:47657 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105456s
	[INFO] 10.244.0.3:52223 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145371s
	[INFO] 10.244.0.3:41870 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000178913s
	[INFO] 10.244.0.3:41925 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071881s
	[INFO] 10.244.0.3:39621 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073491s
	[INFO] 10.244.1.2:57097 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139978s
	[INFO] 10.244.1.2:39532 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000227935s
	[INFO] 10.244.1.2:34609 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112344s
	[INFO] 10.244.1.2:33126 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000138362s
	[INFO] 10.244.0.3:60416 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117217s
	[INFO] 10.244.0.3:38982 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00007414s
	[INFO] 10.244.0.3:49474 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000066889s
	[INFO] 10.244.0.3:56944 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000064658s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
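Note: the lookups logged by this coredns instance (kubernetes.default, host.minikube.internal, and the reverse PTR records) are ordinary in-cluster DNS queries. Assuming the busybox deployment behind the busybox-fc5497c4f-* pods above is still present, equivalent queries can be issued with something like:

  $ kubectl --context multinode-254635 exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local
  $ kubectl --context multinode-254635 exec deploy/busybox -- nslookup host.minikube.internal

The exec target deploy/busybox is inferred from the pod names in the log; any pod with nslookup available would serve equally well.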
	
	==> describe nodes <==
	Name:               multinode-254635
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-254635
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=multinode-254635
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T11_38_14_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:38:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-254635
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:46:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:45:01 +0000   Mon, 22 Apr 2024 11:38:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:45:01 +0000   Mon, 22 Apr 2024 11:38:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:45:01 +0000   Mon, 22 Apr 2024 11:38:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:45:01 +0000   Mon, 22 Apr 2024 11:38:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    multinode-254635
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2f8c402073c446489e978f037232a51b
	  System UUID:                2f8c4020-73c4-4648-9e97-8f037232a51b
	  Boot ID:                    4a2171db-fa95-402b-8c19-b12ba2852d41
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-w6wst                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 coredns-7db6d8ff4d-858b8                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m2s
	  kube-system                 etcd-multinode-254635                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m15s
	  kube-system                 kindnet-jzhvl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m2s
	  kube-system                 kube-apiserver-multinode-254635             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-controller-manager-multinode-254635    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-proxy-mr7rq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  kube-system                 kube-scheduler-multinode-254635             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m59s              kube-proxy       
	  Normal  Starting                 85s                kube-proxy       
	  Normal  NodeHasSufficientPID     8m15s              kubelet          Node multinode-254635 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m15s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m15s              kubelet          Node multinode-254635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m15s              kubelet          Node multinode-254635 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m15s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m3s               node-controller  Node multinode-254635 event: Registered Node multinode-254635 in Controller
	  Normal  NodeReady                7m29s              kubelet          Node multinode-254635 status is now: NodeReady
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  91s (x8 over 91s)  kubelet          Node multinode-254635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s (x8 over 91s)  kubelet          Node multinode-254635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s (x7 over 91s)  kubelet          Node multinode-254635 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           74s                node-controller  Node multinode-254635 event: Registered Node multinode-254635 in Controller
	
	
	Name:               multinode-254635-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-254635-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=multinode-254635
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T11_45_45_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:45:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-254635-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:46:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:46:16 +0000   Mon, 22 Apr 2024 11:45:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:46:16 +0000   Mon, 22 Apr 2024 11:45:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:46:16 +0000   Mon, 22 Apr 2024 11:45:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:46:16 +0000   Mon, 22 Apr 2024 11:45:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    multinode-254635-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f3ae7ac65784a94b9f762b55c80c783
	  System UUID:                7f3ae7ac-6578-4a94-b9f7-62b55c80c783
	  Boot ID:                    288dcc2f-8b41-44e7-a49d-cd4a33ebeeeb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2cvd8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kindnet-4jq8c              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m56s
	  kube-system                 kube-proxy-bkcdv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m51s                  kube-proxy       
	  Normal  Starting                 38s                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    6m56s (x2 over 6m56s)  kubelet          Node multinode-254635-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m56s (x2 over 6m56s)  kubelet          Node multinode-254635-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m56s (x2 over 6m56s)  kubelet          Node multinode-254635-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 6m56s                  kubelet          Starting kubelet.
	  Normal  NodeReady                6m46s                  kubelet          Node multinode-254635-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  43s (x2 over 43s)      kubelet          Node multinode-254635-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x2 over 43s)      kubelet          Node multinode-254635-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x2 over 43s)      kubelet          Node multinode-254635-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           39s                    node-controller  Node multinode-254635-m02 event: Registered Node multinode-254635-m02 in Controller
	  Normal  NodeReady                34s                    kubelet          Node multinode-254635-m02 status is now: NodeReady
	
	
	Name:               multinode-254635-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-254635-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=multinode-254635
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T11_46_16_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:46:15 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-254635-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:46:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:46:25 +0000   Mon, 22 Apr 2024 11:46:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:46:25 +0000   Mon, 22 Apr 2024 11:46:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:46:25 +0000   Mon, 22 Apr 2024 11:46:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:46:25 +0000   Mon, 22 Apr 2024 11:46:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.75
	  Hostname:    multinode-254635-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee82292b61d842aab4f6479c52a37304
	  System UUID:                ee82292b-61d8-42aa-b4f6-479c52a37304
	  Boot ID:                    02c75f84-335d-478d-9895-6567a7fbd64a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fsg5v       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m3s
	  kube-system                 kube-proxy-8xngk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  Starting                 5m59s                  kube-proxy       
	  Normal  Starting                 8s                     kube-proxy       
	  Normal  NodeHasSufficientMemory  6m4s (x2 over 6m4s)    kubelet          Node multinode-254635-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m4s (x2 over 6m4s)    kubelet          Node multinode-254635-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m4s (x2 over 6m4s)    kubelet          Node multinode-254635-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m54s                  kubelet          Node multinode-254635-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m22s (x2 over 5m22s)  kubelet          Node multinode-254635-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s (x2 over 5m22s)  kubelet          Node multinode-254635-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m22s (x2 over 5m22s)  kubelet          Node multinode-254635-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m12s                  kubelet          Node multinode-254635-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)      kubelet          Node multinode-254635-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)      kubelet          Node multinode-254635-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)      kubelet          Node multinode-254635-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                     node-controller  Node multinode-254635-m03 event: Registered Node multinode-254635-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-254635-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.055900] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.175538] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.152802] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.305706] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[Apr22 11:38] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.060726] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.642952] systemd-fstab-generator[964]: Ignoring "noauto" option for root device
	[  +0.065049] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.011996] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.079352] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.227976] systemd-fstab-generator[1489]: Ignoring "noauto" option for root device
	[  +0.091992] kauditd_printk_skb: 21 callbacks suppressed
	[ +32.270710] kauditd_printk_skb: 60 callbacks suppressed
	[Apr22 11:39] kauditd_printk_skb: 12 callbacks suppressed
	[Apr22 11:44] systemd-fstab-generator[2782]: Ignoring "noauto" option for root device
	[  +0.154353] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.178213] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.138656] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.314062] systemd-fstab-generator[2848]: Ignoring "noauto" option for root device
	[  +0.766210] systemd-fstab-generator[2946]: Ignoring "noauto" option for root device
	[  +2.084725] systemd-fstab-generator[3070]: Ignoring "noauto" option for root device
	[Apr22 11:45] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.451316] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.947292] systemd-fstab-generator[3878]: Ignoring "noauto" option for root device
	[ +17.398344] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [cab8f0adadfda9e02f9436c2cee58b7ec4d68640fc4514df80155477e55ffd7c] <==
	{"level":"info","ts":"2024-04-22T11:44:59.057679Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T11:44:59.057739Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T11:44:59.057971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d switched to configuration voters=(10357203766055541037)"}
	{"level":"info","ts":"2024-04-22T11:44:59.058053Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e1b99ad77146789e","local-member-id":"8fbc2df34e14192d","added-peer-id":"8fbc2df34e14192d","added-peer-peer-urls":["https://192.168.39.185:2380"]}
	{"level":"info","ts":"2024-04-22T11:44:59.058192Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e1b99ad77146789e","local-member-id":"8fbc2df34e14192d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T11:44:59.058242Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T11:44:59.075662Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-22T11:44:59.077479Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8fbc2df34e14192d","initial-advertise-peer-urls":["https://192.168.39.185:2380"],"listen-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.185:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T11:44:59.077541Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-22T11:44:59.077283Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-04-22T11:44:59.077603Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-04-22T11:45:00.210464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-22T11:45:00.210546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-22T11:45:00.210605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgPreVoteResp from 8fbc2df34e14192d at term 2"}
	{"level":"info","ts":"2024-04-22T11:45:00.210629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became candidate at term 3"}
	{"level":"info","ts":"2024-04-22T11:45:00.210669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgVoteResp from 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2024-04-22T11:45:00.210778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became leader at term 3"}
	{"level":"info","ts":"2024-04-22T11:45:00.210795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8fbc2df34e14192d elected leader 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2024-04-22T11:45:00.221095Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T11:45:00.223304Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T11:45:00.221031Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8fbc2df34e14192d","local-member-attributes":"{Name:multinode-254635 ClientURLs:[https://192.168.39.185:2379]}","request-path":"/0/members/8fbc2df34e14192d/attributes","cluster-id":"e1b99ad77146789e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T11:45:00.227909Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T11:45:00.228105Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T11:45:00.228148Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T11:45:00.233426Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.185:2379"}
	
	
	==> etcd [d3b8493457784233fb659b95632fa92367fa72fc86b760ee436e0ad6468bd664] <==
	{"level":"warn","ts":"2024-04-22T11:40:25.520971Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"275.145009ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1814263479110859697 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/multinode-254635-m03\" mod_revision:598 > success:<request_put:<key:\"/registry/minions/multinode-254635-m03\" value_size:2068 >> failure:<request_range:<key:\"/registry/minions/multinode-254635-m03\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-22T11:40:25.523153Z","caller":"traceutil/trace.go:171","msg":"trace[105130172] transaction","detail":"{read_only:false; number_of_response:1; response_revision:600; }","duration":"523.873692ms","start":"2024-04-22T11:40:24.999263Z","end":"2024-04-22T11:40:25.523136Z","steps":["trace[105130172] 'process raft request'  (duration: 523.77523ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:40:25.523261Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T11:40:24.999254Z","time spent":"523.962586ms","remote":"127.0.0.1:45880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":42,"response count":0,"response size":2164,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-254635-m03\" mod_revision:598 > success:<request_put:<key:\"/registry/minions/multinode-254635-m03\" value_size:2036 >> failure:<request_range:<key:\"/registry/minions/multinode-254635-m03\" > >"}
	{"level":"info","ts":"2024-04-22T11:40:25.523191Z","caller":"traceutil/trace.go:171","msg":"trace[888699519] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"127.062437ms","start":"2024-04-22T11:40:25.396117Z","end":"2024-04-22T11:40:25.523179Z","steps":["trace[888699519] 'process raft request'  (duration: 127.003141ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T11:40:25.523422Z","caller":"traceutil/trace.go:171","msg":"trace[703166275] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"527.435318ms","start":"2024-04-22T11:40:24.995979Z","end":"2024-04-22T11:40:25.523414Z","steps":["trace[703166275] 'process raft request'  (duration: 249.508369ms)","trace[703166275] 'compare'  (duration: 274.936359ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-22T11:40:25.523496Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T11:40:24.99597Z","time spent":"527.501906ms","remote":"127.0.0.1:45880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2114,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-254635-m03\" mod_revision:598 > success:<request_put:<key:\"/registry/minions/multinode-254635-m03\" value_size:2068 >> failure:<request_range:<key:\"/registry/minions/multinode-254635-m03\" > >"}
	{"level":"info","ts":"2024-04-22T11:40:25.523661Z","caller":"traceutil/trace.go:171","msg":"trace[2071054875] linearizableReadLoop","detail":"{readStateIndex:642; appliedIndex:640; }","duration":"524.328412ms","start":"2024-04-22T11:40:24.999319Z","end":"2024-04-22T11:40:25.523647Z","steps":["trace[2071054875] 'read index received'  (duration: 48.541315ms)","trace[2071054875] 'applied index is now lower than readState.Index'  (duration: 475.786388ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-22T11:40:25.523833Z","caller":"traceutil/trace.go:171","msg":"trace[621235512] transaction","detail":"{read_only:false; number_of_response:1; response_revision:600; }","duration":"524.484699ms","start":"2024-04-22T11:40:24.999339Z","end":"2024-04-22T11:40:25.523824Z","steps":["trace[621235512] 'process raft request'  (duration: 523.751174ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:40:25.523875Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T11:40:24.999336Z","time spent":"524.513825ms","remote":"127.0.0.1:45880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":42,"response count":0,"response size":2164,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-254635-m03\" mod_revision:598 > success:<request_put:<key:\"/registry/minions/multinode-254635-m03\" value_size:2033 >> failure:<request_range:<key:\"/registry/minions/multinode-254635-m03\" > >"}
	{"level":"warn","ts":"2024-04-22T11:40:25.524031Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"524.703984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-22T11:40:25.52409Z","caller":"traceutil/trace.go:171","msg":"trace[327369696] range","detail":"{range_begin:/registry/limitranges/kube-system/; range_end:/registry/limitranges/kube-system0; response_count:0; response_revision:601; }","duration":"524.77683ms","start":"2024-04-22T11:40:24.9993Z","end":"2024-04-22T11:40:25.524077Z","steps":["trace[327369696] 'agreement among raft nodes before linearized reading'  (duration: 524.672554ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:40:25.524118Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T11:40:24.999292Z","time spent":"524.813771ms","remote":"127.0.0.1:45854","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":0,"response size":29,"request content":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" "}
	{"level":"warn","ts":"2024-04-22T11:40:25.524245Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"524.793893ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/kube-node-lease/\" range_end:\"/registry/resourcequotas/kube-node-lease0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-22T11:40:25.524289Z","caller":"traceutil/trace.go:171","msg":"trace[1580525120] range","detail":"{range_begin:/registry/resourcequotas/kube-node-lease/; range_end:/registry/resourcequotas/kube-node-lease0; response_count:0; response_revision:601; }","duration":"524.844997ms","start":"2024-04-22T11:40:24.999438Z","end":"2024-04-22T11:40:25.524283Z","steps":["trace[1580525120] 'agreement among raft nodes before linearized reading'  (duration: 524.790183ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:40:25.524309Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T11:40:24.999433Z","time spent":"524.86999ms","remote":"127.0.0.1:45804","response type":"/etcdserverpb.KV/Range","request count":0,"request size":86,"response count":0,"response size":29,"request content":"key:\"/registry/resourcequotas/kube-node-lease/\" range_end:\"/registry/resourcequotas/kube-node-lease0\" "}
	{"level":"info","ts":"2024-04-22T11:43:22.682608Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-22T11:43:22.688249Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-254635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"]}
	{"level":"warn","ts":"2024-04-22T11:43:22.688422Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T11:43:22.688665Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T11:43:22.759992Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.185:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T11:43:22.760056Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.185:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-22T11:43:22.761619Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8fbc2df34e14192d","current-leader-member-id":"8fbc2df34e14192d"}
	{"level":"info","ts":"2024-04-22T11:43:22.765147Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-04-22T11:43:22.76533Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-04-22T11:43:22.765374Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-254635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"]}
	
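Note: the "apply request took too long" warnings from the older etcd instance above report request durations far beyond the 100ms expected-duration. As a sketch only, and assuming etcdctl is present in the etcd pod image and the cert paths match those printed in the etcd startup log (/var/lib/minikube/certs/etcd/...), endpoint status can be spot-checked with:

  $ kubectl --context multinode-254635 -n kube-system exec etcd-multinode-254635 -- etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint status --write-out=table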
	
	==> kernel <==
	 11:46:28 up 8 min,  0 users,  load average: 0.50, 0.36, 0.18
	Linux multinode-254635 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [170cc5dfa96c9013a5da4901c9998afe1e59779fda2e4d36d4697b12c1e7dc34] <==
	I0422 11:45:43.498643       1 main.go:250] Node multinode-254635-m03 has CIDR [10.244.3.0/24] 
	I0422 11:45:53.510332       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:45:53.510376       1 main.go:227] handling current node
	I0422 11:45:53.510386       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:45:53.510393       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:45:53.510492       1 main.go:223] Handling node with IPs: map[192.168.39.75:{}]
	I0422 11:45:53.510525       1 main.go:250] Node multinode-254635-m03 has CIDR [10.244.3.0/24] 
	I0422 11:46:03.554256       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:46:03.554352       1 main.go:227] handling current node
	I0422 11:46:03.554375       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:46:03.554393       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:46:03.554499       1 main.go:223] Handling node with IPs: map[192.168.39.75:{}]
	I0422 11:46:03.554518       1 main.go:250] Node multinode-254635-m03 has CIDR [10.244.3.0/24] 
	I0422 11:46:13.568388       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:46:13.568489       1 main.go:227] handling current node
	I0422 11:46:13.568518       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:46:13.568594       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:46:13.568783       1 main.go:223] Handling node with IPs: map[192.168.39.75:{}]
	I0422 11:46:13.568821       1 main.go:250] Node multinode-254635-m03 has CIDR [10.244.3.0/24] 
	I0422 11:46:23.576573       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:46:23.577930       1 main.go:227] handling current node
	I0422 11:46:23.578036       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:46:23.578070       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:46:23.578195       1 main.go:223] Handling node with IPs: map[192.168.39.75:{}]
	I0422 11:46:23.578215       1 main.go:250] Node multinode-254635-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [8bd2ac5a2adfeb536099f59cf363bbbde81f2e3983e1d6c18a1f6565651e8ed9] <==
	I0422 11:42:38.963356       1 main.go:250] Node multinode-254635-m03 has CIDR [10.244.3.0/24] 
	I0422 11:42:48.968852       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:42:48.969123       1 main.go:227] handling current node
	I0422 11:42:48.969224       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:42:48.969332       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:42:48.969606       1 main.go:223] Handling node with IPs: map[192.168.39.75:{}]
	I0422 11:42:48.969642       1 main.go:250] Node multinode-254635-m03 has CIDR [10.244.3.0/24] 
	I0422 11:42:58.978875       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:42:58.979106       1 main.go:227] handling current node
	I0422 11:42:58.979152       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:42:58.979173       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:42:58.979303       1 main.go:223] Handling node with IPs: map[192.168.39.75:{}]
	I0422 11:42:58.979324       1 main.go:250] Node multinode-254635-m03 has CIDR [10.244.3.0/24] 
	I0422 11:43:08.984370       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:43:08.984465       1 main.go:227] handling current node
	I0422 11:43:08.984492       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:43:08.984518       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:43:08.984636       1 main.go:223] Handling node with IPs: map[192.168.39.75:{}]
	I0422 11:43:08.984657       1 main.go:250] Node multinode-254635-m03 has CIDR [10.244.3.0/24] 
	I0422 11:43:18.991778       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:43:18.991933       1 main.go:227] handling current node
	I0422 11:43:18.991963       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:43:18.991982       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:43:18.992103       1 main.go:223] Handling node with IPs: map[192.168.39.75:{}]
	I0422 11:43:18.992129       1 main.go:250] Node multinode-254635-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7c0d3bf49be403d6755298c16ec74a2883dc6b3e3c8efde6968515c7bc280b9c] <==
	I0422 11:43:22.687662       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0422 11:43:22.691079       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.712633       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.712830       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.712906       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.712968       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713012       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713094       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713156       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713206       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713339       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713528       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713635       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713848       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713971       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714031       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714091       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714142       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714194       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714246       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714297       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714363       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714417       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714488       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714552       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9631acc3cd8008ea734f6268932041e8e0e08d96b2532faa4be1d1e017eae954] <==
	I0422 11:45:01.628170       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0422 11:45:01.646641       1 aggregator.go:165] initial CRD sync complete...
	I0422 11:45:01.646731       1 autoregister_controller.go:141] Starting autoregister controller
	I0422 11:45:01.646740       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0422 11:45:01.649587       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0422 11:45:01.654365       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 11:45:01.654423       1 policy_source.go:224] refreshing policies
	I0422 11:45:01.685420       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0422 11:45:01.727572       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0422 11:45:01.727639       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0422 11:45:01.727647       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0422 11:45:01.728325       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0422 11:45:01.730640       1 shared_informer.go:320] Caches are synced for configmaps
	I0422 11:45:01.734226       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0422 11:45:01.734681       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0422 11:45:01.740270       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0422 11:45:01.747403       1 cache.go:39] Caches are synced for autoregister controller
	I0422 11:45:02.559308       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0422 11:45:03.840024       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0422 11:45:03.980507       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0422 11:45:03.997770       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0422 11:45:04.091590       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0422 11:45:04.101431       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0422 11:45:14.698505       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0422 11:45:14.746108       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [a4867fbb06694830cf22ead69bf0ddd10a883b530a624a5e9b3b78fa115b0bc2] <==
	I0422 11:45:15.280521       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 11:45:15.280586       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0422 11:45:41.219836       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.939761ms"
	I0422 11:45:41.236882       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.360435ms"
	I0422 11:45:41.237247       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.392µs"
	I0422 11:45:41.238636       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="219.771µs"
	I0422 11:45:44.698808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="137.54µs"
	I0422 11:45:45.388621       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-254635-m02\" does not exist"
	I0422 11:45:45.401068       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-254635-m02" podCIDRs=["10.244.1.0/24"]
	I0422 11:45:47.287792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="173.663µs"
	I0422 11:45:47.330115       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.301µs"
	I0422 11:45:47.342257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.482µs"
	I0422 11:45:47.357514       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.082µs"
	I0422 11:45:47.364980       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.347µs"
	I0422 11:45:47.370784       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="119.382µs"
	I0422 11:45:54.631951       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:45:54.662344       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.643µs"
	I0422 11:45:54.678142       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.786µs"
	I0422 11:45:57.696630       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.068468ms"
	I0422 11:45:57.697476       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.976µs"
	I0422 11:46:14.343813       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:46:15.681229       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:46:15.681555       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-254635-m03\" does not exist"
	I0422 11:46:15.707872       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-254635-m03" podCIDRs=["10.244.2.0/24"]
	I0422 11:46:25.459145       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m03"
	
	
	==> kube-controller-manager [d66e29130d9c973c5174eff5a88cb844d52b9fa38ad6333085bb66c3bd155697] <==
	I0422 11:39:32.444416       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-254635-m02\" does not exist"
	I0422 11:39:32.458217       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-254635-m02" podCIDRs=["10.244.1.0/24"]
	I0422 11:39:35.905969       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-254635-m02"
	I0422 11:39:42.562419       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:39:44.796369       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.108493ms"
	I0422 11:39:44.820377       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.94515ms"
	I0422 11:39:44.836954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.479372ms"
	I0422 11:39:44.837099       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.2µs"
	I0422 11:39:48.335005       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.519124ms"
	I0422 11:39:48.335581       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.085µs"
	I0422 11:39:48.518329       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.003965ms"
	I0422 11:39:48.519176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.724µs"
	I0422 11:40:24.988988       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-254635-m03\" does not exist"
	I0422 11:40:24.989079       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:40:25.671074       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-254635-m03" podCIDRs=["10.244.2.0/24"]
	I0422 11:40:25.927612       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-254635-m03"
	I0422 11:40:34.119904       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:41:05.380402       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:41:06.480929       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-254635-m03\" does not exist"
	I0422 11:41:06.480997       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:41:06.494917       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-254635-m03" podCIDRs=["10.244.3.0/24"]
	I0422 11:41:16.043801       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:41:55.983460       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m03"
	I0422 11:41:56.041678       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.133166ms"
	I0422 11:41:56.041940       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.289µs"
	
	
	==> kube-proxy [0479f88c8f22fadfd7ca5c88a541baead1e410717b9d83c5f6e8c9c81026cd90] <==
	I0422 11:45:02.733443       1 server_linux.go:69] "Using iptables proxy"
	I0422 11:45:02.747991       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	I0422 11:45:02.873869       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 11:45:02.873972       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 11:45:02.873992       1 server_linux.go:165] "Using iptables Proxier"
	I0422 11:45:02.880012       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 11:45:02.880293       1 server.go:872] "Version info" version="v1.30.0"
	I0422 11:45:02.880342       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:45:02.887348       1 config.go:192] "Starting service config controller"
	I0422 11:45:02.887393       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 11:45:02.887419       1 config.go:101] "Starting endpoint slice config controller"
	I0422 11:45:02.887423       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 11:45:02.887458       1 config.go:319] "Starting node config controller"
	I0422 11:45:02.887488       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 11:45:02.987818       1 shared_informer.go:320] Caches are synced for node config
	I0422 11:45:02.987872       1 shared_informer.go:320] Caches are synced for service config
	I0422 11:45:02.987891       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [70ea62bce313978f143670020dbcaed41edb5279e812840d18fa210fbf68433d] <==
	I0422 11:38:28.565657       1 server_linux.go:69] "Using iptables proxy"
	I0422 11:38:28.577360       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	I0422 11:38:28.629123       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 11:38:28.629185       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 11:38:28.629201       1 server_linux.go:165] "Using iptables Proxier"
	I0422 11:38:28.632427       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 11:38:28.632783       1 server.go:872] "Version info" version="v1.30.0"
	I0422 11:38:28.633026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:38:28.634161       1 config.go:192] "Starting service config controller"
	I0422 11:38:28.634209       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 11:38:28.634232       1 config.go:101] "Starting endpoint slice config controller"
	I0422 11:38:28.634235       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 11:38:28.634802       1 config.go:319] "Starting node config controller"
	I0422 11:38:28.634834       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 11:38:28.734287       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 11:38:28.734472       1 shared_informer.go:320] Caches are synced for service config
	I0422 11:38:28.735053       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [07a0b4812dd3b1c6a7f0c82617d26c0ccc45b8b8e1d30d6c318f8bda12735f0b] <==
	E0422 11:38:11.518868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 11:38:11.537909       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 11:38:11.538024       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 11:38:11.558372       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 11:38:11.558428       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 11:38:11.669188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 11:38:11.671211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 11:38:11.713377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 11:38:11.713437       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0422 11:38:11.720441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 11:38:11.722328       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 11:38:11.740314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 11:38:11.740791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 11:38:11.751633       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 11:38:11.751790       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 11:38:11.812154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 11:38:11.812323       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 11:38:11.845280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 11:38:11.845540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 11:38:11.849519       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 11:38:11.849791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0422 11:38:11.954544       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 11:38:11.954597       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0422 11:38:13.821796       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0422 11:43:22.683460       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [86609797440bf5cb0ebb23673aadaa0c528eea783ee8792fab8e9c928d17a31c] <==
	I0422 11:44:59.763394       1 serving.go:380] Generated self-signed cert in-memory
	W0422 11:45:01.579123       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0422 11:45:01.579263       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 11:45:01.579298       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0422 11:45:01.579403       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0422 11:45:01.637396       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0422 11:45:01.637889       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:45:01.649918       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0422 11:45:01.652761       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0422 11:45:01.652782       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 11:45:01.652791       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 11:45:01.754772       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 22 11:44:58 multinode-254635 kubelet[3078]: E0422 11:44:58.890859    3078 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.185:8443: connect: connection refused
	Apr 22 11:44:59 multinode-254635 kubelet[3078]: I0422 11:44:59.329622    3078 kubelet_node_status.go:73] "Attempting to register node" node="multinode-254635"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.705880    3078 kubelet_node_status.go:112] "Node was previously registered" node="multinode-254635"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.706271    3078 kubelet_node_status.go:76] "Successfully registered node" node="multinode-254635"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.709123    3078 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.711798    3078 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.787638    3078 apiserver.go:52] "Watching apiserver"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.793311    3078 topology_manager.go:215] "Topology Admit Handler" podUID="848b349d-906a-411c-a60b-b559d47ad2a7" podNamespace="kube-system" podName="kindnet-jzhvl"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.793482    3078 topology_manager.go:215] "Topology Admit Handler" podUID="3a91e327-1478-4e50-9993-de3d5406efaa" podNamespace="kube-system" podName="kube-proxy-mr7rq"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.793531    3078 topology_manager.go:215] "Topology Admit Handler" podUID="457a81ab-ca6c-4757-92b1-734ba151216f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-858b8"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.793620    3078 topology_manager.go:215] "Topology Admit Handler" podUID="82216f3b-f366-4b55-893a-8f7c1b59372b" podNamespace="kube-system" podName="storage-provisioner"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.793677    3078 topology_manager.go:215] "Topology Admit Handler" podUID="ec3be7d9-b316-43ba-8c05-c028f530c07e" podNamespace="default" podName="busybox-fc5497c4f-w6wst"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.811639    3078 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.885534    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/82216f3b-f366-4b55-893a-8f7c1b59372b-tmp\") pod \"storage-provisioner\" (UID: \"82216f3b-f366-4b55-893a-8f7c1b59372b\") " pod="kube-system/storage-provisioner"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.886280    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/848b349d-906a-411c-a60b-b559d47ad2a7-lib-modules\") pod \"kindnet-jzhvl\" (UID: \"848b349d-906a-411c-a60b-b559d47ad2a7\") " pod="kube-system/kindnet-jzhvl"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.886398    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/848b349d-906a-411c-a60b-b559d47ad2a7-cni-cfg\") pod \"kindnet-jzhvl\" (UID: \"848b349d-906a-411c-a60b-b559d47ad2a7\") " pod="kube-system/kindnet-jzhvl"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.886444    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/848b349d-906a-411c-a60b-b559d47ad2a7-xtables-lock\") pod \"kindnet-jzhvl\" (UID: \"848b349d-906a-411c-a60b-b559d47ad2a7\") " pod="kube-system/kindnet-jzhvl"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.886491    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a91e327-1478-4e50-9993-de3d5406efaa-lib-modules\") pod \"kube-proxy-mr7rq\" (UID: \"3a91e327-1478-4e50-9993-de3d5406efaa\") " pod="kube-system/kube-proxy-mr7rq"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.886561    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a91e327-1478-4e50-9993-de3d5406efaa-xtables-lock\") pod \"kube-proxy-mr7rq\" (UID: \"3a91e327-1478-4e50-9993-de3d5406efaa\") " pod="kube-system/kube-proxy-mr7rq"
	Apr 22 11:45:09 multinode-254635 kubelet[3078]: I0422 11:45:09.239575    3078 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 22 11:45:57 multinode-254635 kubelet[3078]: E0422 11:45:57.841186    3078 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:45:57 multinode-254635 kubelet[3078]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:45:57 multinode-254635 kubelet[3078]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:45:57 multinode-254635 kubelet[3078]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:45:57 multinode-254635 kubelet[3078]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 11:46:27.789071   47723 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18711-7633/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-254635 -n multinode-254635
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-254635 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (310.75s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 stop
E0422 11:46:57.324055   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-254635 stop: exit status 82 (2m0.47548544s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-254635-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-254635 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-254635 status: exit status 3 (18.710722675s)

                                                
                                                
-- stdout --
	multinode-254635
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-254635-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 11:48:51.145084   48389 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	E0422 11:48:51.145118   48389 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-254635 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-254635 -n multinode-254635
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-254635 logs -n 25: (1.641970451s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-254635 ssh -n                                                                 | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-254635 cp multinode-254635-m02:/home/docker/cp-test.txt                       | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635:/home/docker/cp-test_multinode-254635-m02_multinode-254635.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n                                                                 | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n multinode-254635 sudo cat                                       | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | /home/docker/cp-test_multinode-254635-m02_multinode-254635.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-254635 cp multinode-254635-m02:/home/docker/cp-test.txt                       | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m03:/home/docker/cp-test_multinode-254635-m02_multinode-254635-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n                                                                 | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n multinode-254635-m03 sudo cat                                   | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | /home/docker/cp-test_multinode-254635-m02_multinode-254635-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-254635 cp testdata/cp-test.txt                                                | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n                                                                 | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-254635 cp multinode-254635-m03:/home/docker/cp-test.txt                       | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile714579271/001/cp-test_multinode-254635-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n                                                                 | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-254635 cp multinode-254635-m03:/home/docker/cp-test.txt                       | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635:/home/docker/cp-test_multinode-254635-m03_multinode-254635.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n                                                                 | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n multinode-254635 sudo cat                                       | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | /home/docker/cp-test_multinode-254635-m03_multinode-254635.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-254635 cp multinode-254635-m03:/home/docker/cp-test.txt                       | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m02:/home/docker/cp-test_multinode-254635-m03_multinode-254635-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n                                                                 | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | multinode-254635-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-254635 ssh -n multinode-254635-m02 sudo cat                                   | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	|         | /home/docker/cp-test_multinode-254635-m03_multinode-254635-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-254635 node stop m03                                                          | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:40 UTC |
	| node    | multinode-254635 node start                                                             | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:40 UTC | 22 Apr 24 11:41 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-254635                                                                | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:41 UTC |                     |
	| stop    | -p multinode-254635                                                                     | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:41 UTC |                     |
	| start   | -p multinode-254635                                                                     | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:43 UTC | 22 Apr 24 11:46 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-254635                                                                | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:46 UTC |                     |
	| node    | multinode-254635 node delete                                                            | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:46 UTC | 22 Apr 24 11:46 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-254635 stop                                                                   | multinode-254635 | jenkins | v1.33.0 | 22 Apr 24 11:46 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 11:43:21
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 11:43:21.817508   46587 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:43:21.817634   46587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:43:21.817644   46587 out.go:304] Setting ErrFile to fd 2...
	I0422 11:43:21.817649   46587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:43:21.817854   46587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:43:21.818393   46587 out.go:298] Setting JSON to false
	I0422 11:43:21.819370   46587 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5145,"bootTime":1713781057,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 11:43:21.819425   46587 start.go:139] virtualization: kvm guest
	I0422 11:43:21.822027   46587 out.go:177] * [multinode-254635] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 11:43:21.823920   46587 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 11:43:21.825792   46587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 11:43:21.823935   46587 notify.go:220] Checking for updates...
	I0422 11:43:21.828554   46587 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 11:43:21.830102   46587 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:43:21.831396   46587 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 11:43:21.832735   46587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 11:43:21.834521   46587 config.go:182] Loaded profile config "multinode-254635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:43:21.834641   46587 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 11:43:21.835102   46587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:43:21.835147   46587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:43:21.850203   46587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37641
	I0422 11:43:21.850629   46587 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:43:21.851103   46587 main.go:141] libmachine: Using API Version  1
	I0422 11:43:21.851125   46587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:43:21.851534   46587 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:43:21.851743   46587 main.go:141] libmachine: (multinode-254635) Calling .DriverName
	I0422 11:43:21.888032   46587 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 11:43:21.889327   46587 start.go:297] selected driver: kvm2
	I0422 11:43:21.889339   46587 start.go:901] validating driver "kvm2" against &{Name:multinode-254635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-254635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:43:21.889469   46587 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 11:43:21.889765   46587 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 11:43:21.889829   46587 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18711-7633/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 11:43:21.903976   46587 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 11:43:21.905022   46587 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 11:43:21.905126   46587 cni.go:84] Creating CNI manager for ""
	I0422 11:43:21.905137   46587 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0422 11:43:21.905238   46587 start.go:340] cluster config:
	{Name:multinode-254635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-254635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:43:21.905519   46587 iso.go:125] acquiring lock: {Name:mkb6ac9fd17ffabc92a94047094130aad6203a95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 11:43:21.908043   46587 out.go:177] * Starting "multinode-254635" primary control-plane node in "multinode-254635" cluster
	I0422 11:43:21.909519   46587 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 11:43:21.909563   46587 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 11:43:21.909578   46587 cache.go:56] Caching tarball of preloaded images
	I0422 11:43:21.909662   46587 preload.go:173] Found /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 11:43:21.909677   46587 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 11:43:21.909836   46587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/config.json ...
	I0422 11:43:21.910075   46587 start.go:360] acquireMachinesLock for multinode-254635: {Name:mk5cb9b294e703b264c1f97ac968ffd01e93b576 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 11:43:21.910130   46587 start.go:364] duration metric: took 30.53µs to acquireMachinesLock for "multinode-254635"
	I0422 11:43:21.910150   46587 start.go:96] Skipping create...Using existing machine configuration
	I0422 11:43:21.910158   46587 fix.go:54] fixHost starting: 
	I0422 11:43:21.910437   46587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:43:21.910460   46587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:43:21.924478   46587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0422 11:43:21.924915   46587 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:43:21.925413   46587 main.go:141] libmachine: Using API Version  1
	I0422 11:43:21.925431   46587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:43:21.925762   46587 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:43:21.925955   46587 main.go:141] libmachine: (multinode-254635) Calling .DriverName
	I0422 11:43:21.926085   46587 main.go:141] libmachine: (multinode-254635) Calling .GetState
	I0422 11:43:21.927583   46587 fix.go:112] recreateIfNeeded on multinode-254635: state=Running err=<nil>
	W0422 11:43:21.927607   46587 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 11:43:21.930764   46587 out.go:177] * Updating the running kvm2 "multinode-254635" VM ...
	I0422 11:43:21.932126   46587 machine.go:94] provisionDockerMachine start ...
	I0422 11:43:21.932151   46587 main.go:141] libmachine: (multinode-254635) Calling .DriverName
	I0422 11:43:21.932355   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:43:21.934919   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:21.935372   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:43:21.935399   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:21.935563   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:43:21.935752   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:21.935911   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:21.936048   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:43:21.936190   46587 main.go:141] libmachine: Using SSH client type: native
	I0422 11:43:21.936362   46587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0422 11:43:21.936373   46587 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 11:43:22.046603   46587 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-254635
	
	I0422 11:43:22.046635   46587 main.go:141] libmachine: (multinode-254635) Calling .GetMachineName
	I0422 11:43:22.046887   46587 buildroot.go:166] provisioning hostname "multinode-254635"
	I0422 11:43:22.046914   46587 main.go:141] libmachine: (multinode-254635) Calling .GetMachineName
	I0422 11:43:22.047110   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:43:22.050100   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.050513   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:43:22.050533   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.050668   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:43:22.050855   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:22.051002   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:22.051266   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:43:22.051419   46587 main.go:141] libmachine: Using SSH client type: native
	I0422 11:43:22.051584   46587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0422 11:43:22.051597   46587 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-254635 && echo "multinode-254635" | sudo tee /etc/hostname
	I0422 11:43:22.180915   46587 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-254635
	
	I0422 11:43:22.180944   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:43:22.183744   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.184156   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:43:22.184182   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.184398   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:43:22.184628   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:22.184801   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:22.184948   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:43:22.185170   46587 main.go:141] libmachine: Using SSH client type: native
	I0422 11:43:22.185413   46587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0422 11:43:22.185440   46587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-254635' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-254635/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-254635' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 11:43:22.294704   46587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 11:43:22.294731   46587 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18711-7633/.minikube CaCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18711-7633/.minikube}
	I0422 11:43:22.294746   46587 buildroot.go:174] setting up certificates
	I0422 11:43:22.294753   46587 provision.go:84] configureAuth start
	I0422 11:43:22.294761   46587 main.go:141] libmachine: (multinode-254635) Calling .GetMachineName
	I0422 11:43:22.295006   46587 main.go:141] libmachine: (multinode-254635) Calling .GetIP
	I0422 11:43:22.297654   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.298082   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:43:22.298115   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.298192   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:43:22.300209   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.300548   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:43:22.300586   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.300738   46587 provision.go:143] copyHostCerts
	I0422 11:43:22.300785   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:43:22.300822   46587 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem, removing ...
	I0422 11:43:22.300833   46587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 11:43:22.300915   46587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem (1078 bytes)
	I0422 11:43:22.301029   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:43:22.301058   46587 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem, removing ...
	I0422 11:43:22.301064   46587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 11:43:22.301107   46587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem (1123 bytes)
	I0422 11:43:22.301178   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:43:22.301208   46587 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem, removing ...
	I0422 11:43:22.301213   46587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 11:43:22.301246   46587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem (1679 bytes)
	I0422 11:43:22.301299   46587 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem org=jenkins.multinode-254635 san=[127.0.0.1 192.168.39.185 localhost minikube multinode-254635]
	I0422 11:43:22.364528   46587 provision.go:177] copyRemoteCerts
	I0422 11:43:22.364603   46587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 11:43:22.364632   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:43:22.367401   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.367781   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:43:22.367808   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.368014   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:43:22.368197   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:22.368413   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:43:22.368559   46587 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/multinode-254635/id_rsa Username:docker}
	I0422 11:43:22.456834   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 11:43:22.456901   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 11:43:22.485546   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 11:43:22.485651   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0422 11:43:22.513757   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 11:43:22.513838   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0422 11:43:22.541468   46587 provision.go:87] duration metric: took 246.700467ms to configureAuth
	I0422 11:43:22.541499   46587 buildroot.go:189] setting minikube options for container-runtime
	I0422 11:43:22.541760   46587 config.go:182] Loaded profile config "multinode-254635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:43:22.541856   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:43:22.544518   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.544932   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:43:22.544958   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:43:22.545190   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:43:22.545410   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:22.545582   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:43:22.545721   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:43:22.545870   46587 main.go:141] libmachine: Using SSH client type: native
	I0422 11:43:22.546052   46587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0422 11:43:22.546074   46587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 11:44:53.249886   46587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 11:44:53.249910   46587 machine.go:97] duration metric: took 1m31.317768662s to provisionDockerMachine
	I0422 11:44:53.249923   46587 start.go:293] postStartSetup for "multinode-254635" (driver="kvm2")
	I0422 11:44:53.249933   46587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 11:44:53.249954   46587 main.go:141] libmachine: (multinode-254635) Calling .DriverName
	I0422 11:44:53.250265   46587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 11:44:53.250286   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:44:53.253445   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.253933   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:44:53.253973   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.254106   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:44:53.254297   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:44:53.254470   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:44:53.254600   46587 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/multinode-254635/id_rsa Username:docker}
	I0422 11:44:53.342751   46587 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 11:44:53.348163   46587 command_runner.go:130] > NAME=Buildroot
	I0422 11:44:53.348187   46587 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0422 11:44:53.348193   46587 command_runner.go:130] > ID=buildroot
	I0422 11:44:53.348200   46587 command_runner.go:130] > VERSION_ID=2023.02.9
	I0422 11:44:53.348207   46587 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0422 11:44:53.348245   46587 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 11:44:53.348260   46587 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/addons for local assets ...
	I0422 11:44:53.348322   46587 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/files for local assets ...
	I0422 11:44:53.348415   46587 filesync.go:149] local asset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> 149452.pem in /etc/ssl/certs
	I0422 11:44:53.348427   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /etc/ssl/certs/149452.pem
	I0422 11:44:53.348585   46587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 11:44:53.360192   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:44:53.387595   46587 start.go:296] duration metric: took 137.660281ms for postStartSetup
	I0422 11:44:53.387630   46587 fix.go:56] duration metric: took 1m31.477473792s for fixHost
	I0422 11:44:53.387655   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:44:53.390550   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.390991   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:44:53.391013   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.391250   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:44:53.391469   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:44:53.391632   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:44:53.391822   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:44:53.391994   46587 main.go:141] libmachine: Using SSH client type: native
	I0422 11:44:53.392143   46587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0422 11:44:53.392153   46587 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 11:44:53.498148   46587 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713786293.486659664
	
	I0422 11:44:53.498178   46587 fix.go:216] guest clock: 1713786293.486659664
	I0422 11:44:53.498186   46587 fix.go:229] Guest: 2024-04-22 11:44:53.486659664 +0000 UTC Remote: 2024-04-22 11:44:53.387634623 +0000 UTC m=+91.623189634 (delta=99.025041ms)
	I0422 11:44:53.498224   46587 fix.go:200] guest clock delta is within tolerance: 99.025041ms
	I0422 11:44:53.498229   46587 start.go:83] releasing machines lock for "multinode-254635", held for 1m31.588085968s
	I0422 11:44:53.498251   46587 main.go:141] libmachine: (multinode-254635) Calling .DriverName
	I0422 11:44:53.498527   46587 main.go:141] libmachine: (multinode-254635) Calling .GetIP
	I0422 11:44:53.501308   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.501757   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:44:53.501780   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.501928   46587 main.go:141] libmachine: (multinode-254635) Calling .DriverName
	I0422 11:44:53.502490   46587 main.go:141] libmachine: (multinode-254635) Calling .DriverName
	I0422 11:44:53.502700   46587 main.go:141] libmachine: (multinode-254635) Calling .DriverName
	I0422 11:44:53.502786   46587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 11:44:53.502826   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:44:53.502924   46587 ssh_runner.go:195] Run: cat /version.json
	I0422 11:44:53.502953   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:44:53.505403   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.505756   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:44:53.505781   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.505800   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.505944   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:44:53.506103   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:44:53.506238   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:44:53.506262   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:44:53.506300   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:53.506412   46587 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/multinode-254635/id_rsa Username:docker}
	I0422 11:44:53.506486   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:44:53.506655   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:44:53.506790   46587 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:44:53.506926   46587 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/multinode-254635/id_rsa Username:docker}
	I0422 11:44:53.586540   46587 command_runner.go:130] > {"iso_version": "v1.33.0", "kicbase_version": "v0.0.43-1713236840-18649", "minikube_version": "v1.33.0", "commit": "4bd203f0c710e7fdd30539846cf2bc6624a2556d"}
	I0422 11:44:53.586678   46587 ssh_runner.go:195] Run: systemctl --version
	I0422 11:44:53.615555   46587 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0422 11:44:53.616345   46587 command_runner.go:130] > systemd 252 (252)
	I0422 11:44:53.616378   46587 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0422 11:44:53.616434   46587 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 11:44:53.784536   46587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0422 11:44:53.799517   46587 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0422 11:44:53.799840   46587 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 11:44:53.799899   46587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 11:44:53.810413   46587 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0422 11:44:53.810430   46587 start.go:494] detecting cgroup driver to use...
	I0422 11:44:53.810484   46587 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 11:44:53.831515   46587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 11:44:53.847526   46587 docker.go:217] disabling cri-docker service (if available) ...
	I0422 11:44:53.847577   46587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 11:44:53.863611   46587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 11:44:53.879312   46587 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 11:44:54.041641   46587 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 11:44:54.183677   46587 docker.go:233] disabling docker service ...
	I0422 11:44:54.183755   46587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 11:44:54.200976   46587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 11:44:54.216037   46587 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 11:44:54.361219   46587 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 11:44:54.507567   46587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 11:44:54.522960   46587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 11:44:54.545728   46587 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0422 11:44:54.545761   46587 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 11:44:54.545810   46587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:44:54.557521   46587 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 11:44:54.557563   46587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:44:54.568658   46587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:44:54.579795   46587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:44:54.590928   46587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 11:44:54.602678   46587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:44:54.614228   46587 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:44:54.627245   46587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 11:44:54.638478   46587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 11:44:54.648891   46587 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0422 11:44:54.648990   46587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 11:44:54.660198   46587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:44:54.815253   46587 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 11:44:55.072476   46587 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 11:44:55.072546   46587 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 11:44:55.078017   46587 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0422 11:44:55.078039   46587 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0422 11:44:55.078046   46587 command_runner.go:130] > Device: 0,22	Inode: 1321        Links: 1
	I0422 11:44:55.078052   46587 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0422 11:44:55.078057   46587 command_runner.go:130] > Access: 2024-04-22 11:44:54.946172638 +0000
	I0422 11:44:55.078063   46587 command_runner.go:130] > Modify: 2024-04-22 11:44:54.946172638 +0000
	I0422 11:44:55.078068   46587 command_runner.go:130] > Change: 2024-04-22 11:44:54.946172638 +0000
	I0422 11:44:55.078072   46587 command_runner.go:130] >  Birth: -
	I0422 11:44:55.078202   46587 start.go:562] Will wait 60s for crictl version
	I0422 11:44:55.078261   46587 ssh_runner.go:195] Run: which crictl
	I0422 11:44:55.082469   46587 command_runner.go:130] > /usr/bin/crictl
	I0422 11:44:55.082682   46587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 11:44:55.127824   46587 command_runner.go:130] > Version:  0.1.0
	I0422 11:44:55.127847   46587 command_runner.go:130] > RuntimeName:  cri-o
	I0422 11:44:55.127852   46587 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0422 11:44:55.127857   46587 command_runner.go:130] > RuntimeApiVersion:  v1
	I0422 11:44:55.128077   46587 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 11:44:55.128150   46587 ssh_runner.go:195] Run: crio --version
	I0422 11:44:55.162596   46587 command_runner.go:130] > crio version 1.29.1
	I0422 11:44:55.162619   46587 command_runner.go:130] > Version:        1.29.1
	I0422 11:44:55.162625   46587 command_runner.go:130] > GitCommit:      unknown
	I0422 11:44:55.162630   46587 command_runner.go:130] > GitCommitDate:  unknown
	I0422 11:44:55.162634   46587 command_runner.go:130] > GitTreeState:   clean
	I0422 11:44:55.162640   46587 command_runner.go:130] > BuildDate:      2024-04-18T23:15:22Z
	I0422 11:44:55.162644   46587 command_runner.go:130] > GoVersion:      go1.21.6
	I0422 11:44:55.162648   46587 command_runner.go:130] > Compiler:       gc
	I0422 11:44:55.162652   46587 command_runner.go:130] > Platform:       linux/amd64
	I0422 11:44:55.162656   46587 command_runner.go:130] > Linkmode:       dynamic
	I0422 11:44:55.162673   46587 command_runner.go:130] > BuildTags:      
	I0422 11:44:55.162677   46587 command_runner.go:130] >   containers_image_ostree_stub
	I0422 11:44:55.162682   46587 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0422 11:44:55.162691   46587 command_runner.go:130] >   btrfs_noversion
	I0422 11:44:55.162695   46587 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0422 11:44:55.162699   46587 command_runner.go:130] >   libdm_no_deferred_remove
	I0422 11:44:55.162703   46587 command_runner.go:130] >   seccomp
	I0422 11:44:55.162708   46587 command_runner.go:130] > LDFlags:          unknown
	I0422 11:44:55.162718   46587 command_runner.go:130] > SeccompEnabled:   true
	I0422 11:44:55.162730   46587 command_runner.go:130] > AppArmorEnabled:  false
	I0422 11:44:55.162809   46587 ssh_runner.go:195] Run: crio --version
	I0422 11:44:55.193470   46587 command_runner.go:130] > crio version 1.29.1
	I0422 11:44:55.193497   46587 command_runner.go:130] > Version:        1.29.1
	I0422 11:44:55.193506   46587 command_runner.go:130] > GitCommit:      unknown
	I0422 11:44:55.193512   46587 command_runner.go:130] > GitCommitDate:  unknown
	I0422 11:44:55.193521   46587 command_runner.go:130] > GitTreeState:   clean
	I0422 11:44:55.193530   46587 command_runner.go:130] > BuildDate:      2024-04-18T23:15:22Z
	I0422 11:44:55.193536   46587 command_runner.go:130] > GoVersion:      go1.21.6
	I0422 11:44:55.193542   46587 command_runner.go:130] > Compiler:       gc
	I0422 11:44:55.193550   46587 command_runner.go:130] > Platform:       linux/amd64
	I0422 11:44:55.193559   46587 command_runner.go:130] > Linkmode:       dynamic
	I0422 11:44:55.193585   46587 command_runner.go:130] > BuildTags:      
	I0422 11:44:55.193595   46587 command_runner.go:130] >   containers_image_ostree_stub
	I0422 11:44:55.193602   46587 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0422 11:44:55.193609   46587 command_runner.go:130] >   btrfs_noversion
	I0422 11:44:55.193619   46587 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0422 11:44:55.193628   46587 command_runner.go:130] >   libdm_no_deferred_remove
	I0422 11:44:55.193635   46587 command_runner.go:130] >   seccomp
	I0422 11:44:55.193644   46587 command_runner.go:130] > LDFlags:          unknown
	I0422 11:44:55.193653   46587 command_runner.go:130] > SeccompEnabled:   true
	I0422 11:44:55.193661   46587 command_runner.go:130] > AppArmorEnabled:  false
	I0422 11:44:55.197104   46587 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 11:44:55.198507   46587 main.go:141] libmachine: (multinode-254635) Calling .GetIP
	I0422 11:44:55.201216   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:55.201596   46587 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:44:55.201625   46587 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:44:55.201827   46587 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 11:44:55.206612   46587 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0422 11:44:55.206932   46587 kubeadm.go:877] updating cluster {Name:multinode-254635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-254635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 11:44:55.207074   46587 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 11:44:55.207128   46587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 11:44:55.256756   46587 command_runner.go:130] > {
	I0422 11:44:55.256798   46587 command_runner.go:130] >   "images": [
	I0422 11:44:55.256804   46587 command_runner.go:130] >     {
	I0422 11:44:55.256818   46587 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0422 11:44:55.256825   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.256834   46587 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0422 11:44:55.256842   46587 command_runner.go:130] >       ],
	I0422 11:44:55.256850   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.256871   46587 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0422 11:44:55.256884   46587 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0422 11:44:55.256893   46587 command_runner.go:130] >       ],
	I0422 11:44:55.256899   46587 command_runner.go:130] >       "size": "65291810",
	I0422 11:44:55.256908   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.256916   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.256929   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.256939   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.256944   46587 command_runner.go:130] >     },
	I0422 11:44:55.256953   46587 command_runner.go:130] >     {
	I0422 11:44:55.256962   46587 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0422 11:44:55.256971   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.256979   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0422 11:44:55.256989   46587 command_runner.go:130] >       ],
	I0422 11:44:55.256995   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.257010   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0422 11:44:55.257024   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0422 11:44:55.257032   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257039   46587 command_runner.go:130] >       "size": "1363676",
	I0422 11:44:55.257048   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.257058   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.257067   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.257072   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.257081   46587 command_runner.go:130] >     },
	I0422 11:44:55.257086   46587 command_runner.go:130] >     {
	I0422 11:44:55.257096   46587 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0422 11:44:55.257105   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.257113   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0422 11:44:55.257122   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257132   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.257146   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0422 11:44:55.257161   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0422 11:44:55.257171   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257180   46587 command_runner.go:130] >       "size": "31470524",
	I0422 11:44:55.257192   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.257201   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.257212   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.257221   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.257230   46587 command_runner.go:130] >     },
	I0422 11:44:55.257239   46587 command_runner.go:130] >     {
	I0422 11:44:55.257251   46587 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0422 11:44:55.257284   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.257295   46587 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0422 11:44:55.257303   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257312   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.257323   46587 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0422 11:44:55.257342   46587 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0422 11:44:55.257356   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257363   46587 command_runner.go:130] >       "size": "61245718",
	I0422 11:44:55.257369   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.257379   46587 command_runner.go:130] >       "username": "nonroot",
	I0422 11:44:55.257385   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.257395   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.257403   46587 command_runner.go:130] >     },
	I0422 11:44:55.257411   46587 command_runner.go:130] >     {
	I0422 11:44:55.257422   46587 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0422 11:44:55.257431   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.257442   46587 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0422 11:44:55.257450   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257459   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.257472   46587 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0422 11:44:55.257485   46587 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0422 11:44:55.257495   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257505   46587 command_runner.go:130] >       "size": "150779692",
	I0422 11:44:55.257513   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.257518   46587 command_runner.go:130] >         "value": "0"
	I0422 11:44:55.257527   46587 command_runner.go:130] >       },
	I0422 11:44:55.257536   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.257544   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.257551   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.257555   46587 command_runner.go:130] >     },
	I0422 11:44:55.257559   46587 command_runner.go:130] >     {
	I0422 11:44:55.257565   46587 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0422 11:44:55.257572   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.257577   46587 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0422 11:44:55.257583   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257588   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.257597   46587 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0422 11:44:55.257607   46587 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0422 11:44:55.257613   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257617   46587 command_runner.go:130] >       "size": "117609952",
	I0422 11:44:55.257623   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.257627   46587 command_runner.go:130] >         "value": "0"
	I0422 11:44:55.257633   46587 command_runner.go:130] >       },
	I0422 11:44:55.257637   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.257640   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.257645   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.257650   46587 command_runner.go:130] >     },
	I0422 11:44:55.257658   46587 command_runner.go:130] >     {
	I0422 11:44:55.257667   46587 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0422 11:44:55.257679   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.257691   46587 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0422 11:44:55.257699   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257706   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.257721   46587 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0422 11:44:55.257736   46587 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0422 11:44:55.257748   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257758   46587 command_runner.go:130] >       "size": "112170310",
	I0422 11:44:55.257766   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.257776   46587 command_runner.go:130] >         "value": "0"
	I0422 11:44:55.257785   46587 command_runner.go:130] >       },
	I0422 11:44:55.257794   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.257803   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.257810   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.257814   46587 command_runner.go:130] >     },
	I0422 11:44:55.257820   46587 command_runner.go:130] >     {
	I0422 11:44:55.257828   46587 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0422 11:44:55.257835   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.257839   46587 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0422 11:44:55.257845   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257849   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.257897   46587 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0422 11:44:55.257910   46587 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0422 11:44:55.257914   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257919   46587 command_runner.go:130] >       "size": "85932953",
	I0422 11:44:55.257923   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.257930   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.257934   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.257940   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.257944   46587 command_runner.go:130] >     },
	I0422 11:44:55.257948   46587 command_runner.go:130] >     {
	I0422 11:44:55.257953   46587 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0422 11:44:55.257957   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.257962   46587 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0422 11:44:55.257966   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257969   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.257976   46587 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0422 11:44:55.257983   46587 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0422 11:44:55.257986   46587 command_runner.go:130] >       ],
	I0422 11:44:55.257990   46587 command_runner.go:130] >       "size": "63026502",
	I0422 11:44:55.257994   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.257997   46587 command_runner.go:130] >         "value": "0"
	I0422 11:44:55.258001   46587 command_runner.go:130] >       },
	I0422 11:44:55.258005   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.258008   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.258012   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.258015   46587 command_runner.go:130] >     },
	I0422 11:44:55.258018   46587 command_runner.go:130] >     {
	I0422 11:44:55.258024   46587 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0422 11:44:55.258034   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.258038   46587 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0422 11:44:55.258041   46587 command_runner.go:130] >       ],
	I0422 11:44:55.258046   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.258056   46587 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0422 11:44:55.258064   46587 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0422 11:44:55.258070   46587 command_runner.go:130] >       ],
	I0422 11:44:55.258074   46587 command_runner.go:130] >       "size": "750414",
	I0422 11:44:55.258080   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.258085   46587 command_runner.go:130] >         "value": "65535"
	I0422 11:44:55.258090   46587 command_runner.go:130] >       },
	I0422 11:44:55.258094   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.258100   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.258105   46587 command_runner.go:130] >       "pinned": true
	I0422 11:44:55.258111   46587 command_runner.go:130] >     }
	I0422 11:44:55.258114   46587 command_runner.go:130] >   ]
	I0422 11:44:55.258118   46587 command_runner.go:130] > }
	I0422 11:44:55.258300   46587 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 11:44:55.258312   46587 crio.go:433] Images already preloaded, skipping extraction
	I0422 11:44:55.258352   46587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 11:44:55.296509   46587 command_runner.go:130] > {
	I0422 11:44:55.296529   46587 command_runner.go:130] >   "images": [
	I0422 11:44:55.296535   46587 command_runner.go:130] >     {
	I0422 11:44:55.296547   46587 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0422 11:44:55.296553   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.296562   46587 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0422 11:44:55.296567   46587 command_runner.go:130] >       ],
	I0422 11:44:55.296573   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.296585   46587 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0422 11:44:55.296597   46587 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0422 11:44:55.296607   46587 command_runner.go:130] >       ],
	I0422 11:44:55.296615   46587 command_runner.go:130] >       "size": "65291810",
	I0422 11:44:55.296623   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.296632   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.296650   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.296660   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.296666   46587 command_runner.go:130] >     },
	I0422 11:44:55.296673   46587 command_runner.go:130] >     {
	I0422 11:44:55.296688   46587 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0422 11:44:55.296698   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.296709   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0422 11:44:55.296718   46587 command_runner.go:130] >       ],
	I0422 11:44:55.296725   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.296737   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0422 11:44:55.296750   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0422 11:44:55.296759   46587 command_runner.go:130] >       ],
	I0422 11:44:55.296767   46587 command_runner.go:130] >       "size": "1363676",
	I0422 11:44:55.296791   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.296804   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.296813   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.296820   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.296826   46587 command_runner.go:130] >     },
	I0422 11:44:55.296832   46587 command_runner.go:130] >     {
	I0422 11:44:55.296842   46587 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0422 11:44:55.296852   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.296871   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0422 11:44:55.296878   46587 command_runner.go:130] >       ],
	I0422 11:44:55.296889   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.296904   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0422 11:44:55.296920   46587 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0422 11:44:55.296929   46587 command_runner.go:130] >       ],
	I0422 11:44:55.296937   46587 command_runner.go:130] >       "size": "31470524",
	I0422 11:44:55.296947   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.296956   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.296966   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.296975   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.296983   46587 command_runner.go:130] >     },
	I0422 11:44:55.296990   46587 command_runner.go:130] >     {
	I0422 11:44:55.297003   46587 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0422 11:44:55.297011   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.297023   46587 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0422 11:44:55.297032   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297040   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.297056   46587 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0422 11:44:55.297077   46587 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0422 11:44:55.297086   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297094   46587 command_runner.go:130] >       "size": "61245718",
	I0422 11:44:55.297100   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.297110   46587 command_runner.go:130] >       "username": "nonroot",
	I0422 11:44:55.297122   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.297132   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.297140   46587 command_runner.go:130] >     },
	I0422 11:44:55.297149   46587 command_runner.go:130] >     {
	I0422 11:44:55.297160   46587 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0422 11:44:55.297171   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.297182   46587 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0422 11:44:55.297188   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297198   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.297212   46587 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0422 11:44:55.297227   46587 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0422 11:44:55.297235   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297243   46587 command_runner.go:130] >       "size": "150779692",
	I0422 11:44:55.297252   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.297260   46587 command_runner.go:130] >         "value": "0"
	I0422 11:44:55.297268   46587 command_runner.go:130] >       },
	I0422 11:44:55.297276   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.297286   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.297295   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.297301   46587 command_runner.go:130] >     },
	I0422 11:44:55.297311   46587 command_runner.go:130] >     {
	I0422 11:44:55.297322   46587 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0422 11:44:55.297338   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.297351   46587 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0422 11:44:55.297360   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297367   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.297383   46587 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0422 11:44:55.297399   46587 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0422 11:44:55.297409   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297417   46587 command_runner.go:130] >       "size": "117609952",
	I0422 11:44:55.297428   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.297437   46587 command_runner.go:130] >         "value": "0"
	I0422 11:44:55.297445   46587 command_runner.go:130] >       },
	I0422 11:44:55.297453   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.297461   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.297469   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.297477   46587 command_runner.go:130] >     },
	I0422 11:44:55.297483   46587 command_runner.go:130] >     {
	I0422 11:44:55.297494   46587 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0422 11:44:55.297505   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.297518   46587 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0422 11:44:55.297527   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297534   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.297551   46587 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0422 11:44:55.297567   46587 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0422 11:44:55.297579   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297591   46587 command_runner.go:130] >       "size": "112170310",
	I0422 11:44:55.297598   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.297606   46587 command_runner.go:130] >         "value": "0"
	I0422 11:44:55.297616   46587 command_runner.go:130] >       },
	I0422 11:44:55.297624   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.297634   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.297643   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.297652   46587 command_runner.go:130] >     },
	I0422 11:44:55.297660   46587 command_runner.go:130] >     {
	I0422 11:44:55.297673   46587 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0422 11:44:55.297684   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.297695   46587 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0422 11:44:55.297702   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297711   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.297732   46587 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0422 11:44:55.297749   46587 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0422 11:44:55.297755   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297765   46587 command_runner.go:130] >       "size": "85932953",
	I0422 11:44:55.297773   46587 command_runner.go:130] >       "uid": null,
	I0422 11:44:55.297783   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.297791   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.297800   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.297806   46587 command_runner.go:130] >     },
	I0422 11:44:55.297815   46587 command_runner.go:130] >     {
	I0422 11:44:55.297827   46587 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0422 11:44:55.297836   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.297845   46587 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0422 11:44:55.297854   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297861   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.297876   46587 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0422 11:44:55.297892   46587 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0422 11:44:55.297901   46587 command_runner.go:130] >       ],
	I0422 11:44:55.297909   46587 command_runner.go:130] >       "size": "63026502",
	I0422 11:44:55.297919   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.297927   46587 command_runner.go:130] >         "value": "0"
	I0422 11:44:55.297933   46587 command_runner.go:130] >       },
	I0422 11:44:55.297942   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.297949   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.297956   46587 command_runner.go:130] >       "pinned": false
	I0422 11:44:55.297965   46587 command_runner.go:130] >     },
	I0422 11:44:55.297973   46587 command_runner.go:130] >     {
	I0422 11:44:55.297984   46587 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0422 11:44:55.297993   46587 command_runner.go:130] >       "repoTags": [
	I0422 11:44:55.298002   46587 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0422 11:44:55.298010   46587 command_runner.go:130] >       ],
	I0422 11:44:55.298018   46587 command_runner.go:130] >       "repoDigests": [
	I0422 11:44:55.298033   46587 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0422 11:44:55.298052   46587 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0422 11:44:55.298060   46587 command_runner.go:130] >       ],
	I0422 11:44:55.298068   46587 command_runner.go:130] >       "size": "750414",
	I0422 11:44:55.298077   46587 command_runner.go:130] >       "uid": {
	I0422 11:44:55.298084   46587 command_runner.go:130] >         "value": "65535"
	I0422 11:44:55.298092   46587 command_runner.go:130] >       },
	I0422 11:44:55.298099   46587 command_runner.go:130] >       "username": "",
	I0422 11:44:55.298109   46587 command_runner.go:130] >       "spec": null,
	I0422 11:44:55.298120   46587 command_runner.go:130] >       "pinned": true
	I0422 11:44:55.298128   46587 command_runner.go:130] >     }
	I0422 11:44:55.298138   46587 command_runner.go:130] >   ]
	I0422 11:44:55.298145   46587 command_runner.go:130] > }
	I0422 11:44:55.298259   46587 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 11:44:55.298270   46587 cache_images.go:84] Images are preloaded, skipping loading
	I0422 11:44:55.298281   46587 kubeadm.go:928] updating node { 192.168.39.185 8443 v1.30.0 crio true true} ...
	I0422 11:44:55.298407   46587 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-254635 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-254635 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 11:44:55.298484   46587 ssh_runner.go:195] Run: crio config
	I0422 11:44:55.343488   46587 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0422 11:44:55.343518   46587 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0422 11:44:55.343528   46587 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0422 11:44:55.343532   46587 command_runner.go:130] > #
	I0422 11:44:55.343541   46587 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0422 11:44:55.343550   46587 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0422 11:44:55.343558   46587 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0422 11:44:55.343568   46587 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0422 11:44:55.343576   46587 command_runner.go:130] > # reload'.
	I0422 11:44:55.343591   46587 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0422 11:44:55.343602   46587 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0422 11:44:55.343616   46587 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0422 11:44:55.343627   46587 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0422 11:44:55.343639   46587 command_runner.go:130] > [crio]
	I0422 11:44:55.343650   46587 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0422 11:44:55.343660   46587 command_runner.go:130] > # containers images, in this directory.
	I0422 11:44:55.343693   46587 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0422 11:44:55.343729   46587 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0422 11:44:55.343998   46587 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0422 11:44:55.344014   46587 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0422 11:44:55.344265   46587 command_runner.go:130] > # imagestore = ""
	I0422 11:44:55.344283   46587 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0422 11:44:55.344297   46587 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0422 11:44:55.344431   46587 command_runner.go:130] > storage_driver = "overlay"
	I0422 11:44:55.344445   46587 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0422 11:44:55.344454   46587 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0422 11:44:55.344460   46587 command_runner.go:130] > storage_option = [
	I0422 11:44:55.344602   46587 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0422 11:44:55.344680   46587 command_runner.go:130] > ]
	I0422 11:44:55.344694   46587 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0422 11:44:55.344704   46587 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0422 11:44:55.345120   46587 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0422 11:44:55.345137   46587 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0422 11:44:55.345147   46587 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0422 11:44:55.345154   46587 command_runner.go:130] > # always happen on a node reboot
	I0422 11:44:55.345432   46587 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0422 11:44:55.345451   46587 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0422 11:44:55.345461   46587 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0422 11:44:55.345472   46587 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0422 11:44:55.345591   46587 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0422 11:44:55.345606   46587 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0422 11:44:55.345619   46587 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0422 11:44:55.346024   46587 command_runner.go:130] > # internal_wipe = true
	I0422 11:44:55.346040   46587 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0422 11:44:55.346048   46587 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0422 11:44:55.346406   46587 command_runner.go:130] > # internal_repair = false
	I0422 11:44:55.346426   46587 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0422 11:44:55.346437   46587 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0422 11:44:55.346445   46587 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0422 11:44:55.346698   46587 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0422 11:44:55.346719   46587 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0422 11:44:55.346726   46587 command_runner.go:130] > [crio.api]
	I0422 11:44:55.346735   46587 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0422 11:44:55.347091   46587 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0422 11:44:55.347117   46587 command_runner.go:130] > # IP address on which the stream server will listen.
	I0422 11:44:55.347398   46587 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0422 11:44:55.347414   46587 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0422 11:44:55.347422   46587 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0422 11:44:55.347862   46587 command_runner.go:130] > # stream_port = "0"
	I0422 11:44:55.347875   46587 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0422 11:44:55.348174   46587 command_runner.go:130] > # stream_enable_tls = false
	I0422 11:44:55.348189   46587 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0422 11:44:55.348428   46587 command_runner.go:130] > # stream_idle_timeout = ""
	I0422 11:44:55.348443   46587 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0422 11:44:55.348454   46587 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0422 11:44:55.348462   46587 command_runner.go:130] > # minutes.
	I0422 11:44:55.348681   46587 command_runner.go:130] > # stream_tls_cert = ""
	I0422 11:44:55.348695   46587 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0422 11:44:55.348705   46587 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0422 11:44:55.349035   46587 command_runner.go:130] > # stream_tls_key = ""
	I0422 11:44:55.349050   46587 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0422 11:44:55.349061   46587 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0422 11:44:55.349078   46587 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0422 11:44:55.349263   46587 command_runner.go:130] > # stream_tls_ca = ""
	I0422 11:44:55.349280   46587 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0422 11:44:55.349474   46587 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0422 11:44:55.349495   46587 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0422 11:44:55.349551   46587 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0422 11:44:55.349567   46587 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0422 11:44:55.349577   46587 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0422 11:44:55.349587   46587 command_runner.go:130] > [crio.runtime]
	I0422 11:44:55.349599   46587 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0422 11:44:55.349611   46587 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0422 11:44:55.349619   46587 command_runner.go:130] > # "nofile=1024:2048"
	I0422 11:44:55.349635   46587 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0422 11:44:55.349882   46587 command_runner.go:130] > # default_ulimits = [
	I0422 11:44:55.350157   46587 command_runner.go:130] > # ]
	I0422 11:44:55.350168   46587 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0422 11:44:55.351733   46587 command_runner.go:130] > # no_pivot = false
	I0422 11:44:55.351749   46587 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0422 11:44:55.351759   46587 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0422 11:44:55.351770   46587 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0422 11:44:55.351779   46587 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0422 11:44:55.351804   46587 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0422 11:44:55.351817   46587 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0422 11:44:55.351825   46587 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0422 11:44:55.351832   46587 command_runner.go:130] > # Cgroup setting for conmon
	I0422 11:44:55.351844   46587 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0422 11:44:55.351854   46587 command_runner.go:130] > conmon_cgroup = "pod"
	I0422 11:44:55.351868   46587 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0422 11:44:55.351879   46587 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0422 11:44:55.351892   46587 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0422 11:44:55.351901   46587 command_runner.go:130] > conmon_env = [
	I0422 11:44:55.351914   46587 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0422 11:44:55.351922   46587 command_runner.go:130] > ]
	I0422 11:44:55.351934   46587 command_runner.go:130] > # Additional environment variables to set for all the
	I0422 11:44:55.351944   46587 command_runner.go:130] > # containers. These are overridden if set in the
	I0422 11:44:55.351956   46587 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0422 11:44:55.351966   46587 command_runner.go:130] > # default_env = [
	I0422 11:44:55.351974   46587 command_runner.go:130] > # ]
	I0422 11:44:55.351983   46587 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0422 11:44:55.352003   46587 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0422 11:44:55.352009   46587 command_runner.go:130] > # selinux = false
	I0422 11:44:55.352020   46587 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0422 11:44:55.352029   46587 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0422 11:44:55.352038   46587 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0422 11:44:55.352045   46587 command_runner.go:130] > # seccomp_profile = ""
	I0422 11:44:55.352053   46587 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0422 11:44:55.352063   46587 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0422 11:44:55.352076   46587 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0422 11:44:55.352086   46587 command_runner.go:130] > # which might increase security.
	I0422 11:44:55.352092   46587 command_runner.go:130] > # This option is currently deprecated,
	I0422 11:44:55.352103   46587 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0422 11:44:55.352113   46587 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0422 11:44:55.352124   46587 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0422 11:44:55.352135   46587 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0422 11:44:55.352147   46587 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0422 11:44:55.352159   46587 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0422 11:44:55.352170   46587 command_runner.go:130] > # This option supports live configuration reload.
	I0422 11:44:55.352185   46587 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0422 11:44:55.352199   46587 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0422 11:44:55.352209   46587 command_runner.go:130] > # the cgroup blockio controller.
	I0422 11:44:55.352220   46587 command_runner.go:130] > # blockio_config_file = ""
	I0422 11:44:55.352233   46587 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0422 11:44:55.352243   46587 command_runner.go:130] > # blockio parameters.
	I0422 11:44:55.352254   46587 command_runner.go:130] > # blockio_reload = false
	I0422 11:44:55.352268   46587 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0422 11:44:55.352276   46587 command_runner.go:130] > # irqbalance daemon.
	I0422 11:44:55.352287   46587 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0422 11:44:55.352298   46587 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0422 11:44:55.352311   46587 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0422 11:44:55.352326   46587 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0422 11:44:55.352337   46587 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0422 11:44:55.352345   46587 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0422 11:44:55.352355   46587 command_runner.go:130] > # This option supports live configuration reload.
	I0422 11:44:55.352364   46587 command_runner.go:130] > # rdt_config_file = ""
	I0422 11:44:55.352375   46587 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0422 11:44:55.352384   46587 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0422 11:44:55.352405   46587 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0422 11:44:55.352414   46587 command_runner.go:130] > # separate_pull_cgroup = ""
	I0422 11:44:55.352433   46587 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0422 11:44:55.352446   46587 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0422 11:44:55.352455   46587 command_runner.go:130] > # will be added.
	I0422 11:44:55.352465   46587 command_runner.go:130] > # default_capabilities = [
	I0422 11:44:55.352474   46587 command_runner.go:130] > # 	"CHOWN",
	I0422 11:44:55.352483   46587 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0422 11:44:55.352491   46587 command_runner.go:130] > # 	"FSETID",
	I0422 11:44:55.352498   46587 command_runner.go:130] > # 	"FOWNER",
	I0422 11:44:55.352506   46587 command_runner.go:130] > # 	"SETGID",
	I0422 11:44:55.352511   46587 command_runner.go:130] > # 	"SETUID",
	I0422 11:44:55.352518   46587 command_runner.go:130] > # 	"SETPCAP",
	I0422 11:44:55.352525   46587 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0422 11:44:55.352533   46587 command_runner.go:130] > # 	"KILL",
	I0422 11:44:55.352538   46587 command_runner.go:130] > # ]
	I0422 11:44:55.352551   46587 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0422 11:44:55.352566   46587 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0422 11:44:55.352577   46587 command_runner.go:130] > # add_inheritable_capabilities = false
	I0422 11:44:55.352589   46587 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0422 11:44:55.352600   46587 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0422 11:44:55.352609   46587 command_runner.go:130] > default_sysctls = [
	I0422 11:44:55.352622   46587 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0422 11:44:55.352630   46587 command_runner.go:130] > ]
	I0422 11:44:55.352642   46587 command_runner.go:130] > # List of devices on the host that a
	I0422 11:44:55.352653   46587 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0422 11:44:55.352662   46587 command_runner.go:130] > # allowed_devices = [
	I0422 11:44:55.352670   46587 command_runner.go:130] > # 	"/dev/fuse",
	I0422 11:44:55.352678   46587 command_runner.go:130] > # ]
	I0422 11:44:55.352686   46587 command_runner.go:130] > # List of additional devices. specified as
	I0422 11:44:55.352700   46587 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0422 11:44:55.352712   46587 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0422 11:44:55.352723   46587 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0422 11:44:55.352732   46587 command_runner.go:130] > # additional_devices = [
	I0422 11:44:55.352739   46587 command_runner.go:130] > # ]
	I0422 11:44:55.352746   46587 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0422 11:44:55.352754   46587 command_runner.go:130] > # cdi_spec_dirs = [
	I0422 11:44:55.352761   46587 command_runner.go:130] > # 	"/etc/cdi",
	I0422 11:44:55.352779   46587 command_runner.go:130] > # 	"/var/run/cdi",
	I0422 11:44:55.352785   46587 command_runner.go:130] > # ]
	I0422 11:44:55.352795   46587 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0422 11:44:55.352808   46587 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0422 11:44:55.352816   46587 command_runner.go:130] > # Defaults to false.
	I0422 11:44:55.352825   46587 command_runner.go:130] > # device_ownership_from_security_context = false
	I0422 11:44:55.352837   46587 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0422 11:44:55.352849   46587 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0422 11:44:55.352858   46587 command_runner.go:130] > # hooks_dir = [
	I0422 11:44:55.352869   46587 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0422 11:44:55.352877   46587 command_runner.go:130] > # ]
	I0422 11:44:55.352889   46587 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0422 11:44:55.352901   46587 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0422 11:44:55.352910   46587 command_runner.go:130] > # its default mounts from the following two files:
	I0422 11:44:55.352917   46587 command_runner.go:130] > #
	I0422 11:44:55.352925   46587 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0422 11:44:55.352939   46587 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0422 11:44:55.352950   46587 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0422 11:44:55.352958   46587 command_runner.go:130] > #
	I0422 11:44:55.352971   46587 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0422 11:44:55.352984   46587 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0422 11:44:55.352997   46587 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0422 11:44:55.353008   46587 command_runner.go:130] > #      only add mounts it finds in this file.
	I0422 11:44:55.353016   46587 command_runner.go:130] > #
	I0422 11:44:55.353022   46587 command_runner.go:130] > # default_mounts_file = ""
	I0422 11:44:55.353033   46587 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0422 11:44:55.353050   46587 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0422 11:44:55.353059   46587 command_runner.go:130] > pids_limit = 1024
	I0422 11:44:55.353069   46587 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0422 11:44:55.353082   46587 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0422 11:44:55.353095   46587 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0422 11:44:55.353109   46587 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0422 11:44:55.353117   46587 command_runner.go:130] > # log_size_max = -1
	I0422 11:44:55.353129   46587 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0422 11:44:55.353138   46587 command_runner.go:130] > # log_to_journald = false
	I0422 11:44:55.353150   46587 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0422 11:44:55.353159   46587 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0422 11:44:55.353170   46587 command_runner.go:130] > # Path to directory for container attach sockets.
	I0422 11:44:55.353180   46587 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0422 11:44:55.353191   46587 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0422 11:44:55.353199   46587 command_runner.go:130] > # bind_mount_prefix = ""
	I0422 11:44:55.353211   46587 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0422 11:44:55.353219   46587 command_runner.go:130] > # read_only = false
	I0422 11:44:55.353232   46587 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0422 11:44:55.353245   46587 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0422 11:44:55.353254   46587 command_runner.go:130] > # live configuration reload.
	I0422 11:44:55.353263   46587 command_runner.go:130] > # log_level = "info"
	I0422 11:44:55.353274   46587 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0422 11:44:55.353284   46587 command_runner.go:130] > # This option supports live configuration reload.
	I0422 11:44:55.353293   46587 command_runner.go:130] > # log_filter = ""
	I0422 11:44:55.353305   46587 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0422 11:44:55.353317   46587 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0422 11:44:55.353326   46587 command_runner.go:130] > # separated by comma.
	I0422 11:44:55.353333   46587 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0422 11:44:55.353340   46587 command_runner.go:130] > # uid_mappings = ""
	I0422 11:44:55.353346   46587 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0422 11:44:55.353355   46587 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0422 11:44:55.353361   46587 command_runner.go:130] > # separated by comma.
	I0422 11:44:55.353368   46587 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0422 11:44:55.353374   46587 command_runner.go:130] > # gid_mappings = ""
	I0422 11:44:55.353380   46587 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0422 11:44:55.353389   46587 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0422 11:44:55.353402   46587 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0422 11:44:55.353412   46587 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0422 11:44:55.353418   46587 command_runner.go:130] > # minimum_mappable_uid = -1
	I0422 11:44:55.353429   46587 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0422 11:44:55.353437   46587 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0422 11:44:55.353443   46587 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0422 11:44:55.353452   46587 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0422 11:44:55.353456   46587 command_runner.go:130] > # minimum_mappable_gid = -1
	I0422 11:44:55.353462   46587 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0422 11:44:55.353471   46587 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0422 11:44:55.353483   46587 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0422 11:44:55.353492   46587 command_runner.go:130] > # ctr_stop_timeout = 30
	I0422 11:44:55.353501   46587 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0422 11:44:55.353513   46587 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0422 11:44:55.353522   46587 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0422 11:44:55.353526   46587 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0422 11:44:55.353530   46587 command_runner.go:130] > drop_infra_ctr = false
	I0422 11:44:55.353542   46587 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0422 11:44:55.353553   46587 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0422 11:44:55.353567   46587 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0422 11:44:55.353576   46587 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0422 11:44:55.353589   46587 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0422 11:44:55.353602   46587 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0422 11:44:55.353613   46587 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0422 11:44:55.353622   46587 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0422 11:44:55.353631   46587 command_runner.go:130] > # shared_cpuset = ""
	I0422 11:44:55.353644   46587 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0422 11:44:55.353654   46587 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0422 11:44:55.353663   46587 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0422 11:44:55.353677   46587 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0422 11:44:55.353687   46587 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0422 11:44:55.353699   46587 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0422 11:44:55.353712   46587 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0422 11:44:55.353721   46587 command_runner.go:130] > # enable_criu_support = false
	I0422 11:44:55.353732   46587 command_runner.go:130] > # Enable/disable the generation of the container,
	I0422 11:44:55.353744   46587 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0422 11:44:55.353758   46587 command_runner.go:130] > # enable_pod_events = false
	I0422 11:44:55.353772   46587 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0422 11:44:55.353784   46587 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0422 11:44:55.353794   46587 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0422 11:44:55.353802   46587 command_runner.go:130] > # default_runtime = "runc"
	I0422 11:44:55.353807   46587 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0422 11:44:55.353817   46587 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0422 11:44:55.353827   46587 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0422 11:44:55.353834   46587 command_runner.go:130] > # creation as a file is not desired either.
	I0422 11:44:55.353845   46587 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0422 11:44:55.353852   46587 command_runner.go:130] > # the hostname is being managed dynamically.
	I0422 11:44:55.353857   46587 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0422 11:44:55.353862   46587 command_runner.go:130] > # ]
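	As a hedged illustration of the option just described, rejecting /etc/hostname (the example the comment itself gives) would look like:
	
	# Illustrative only: fail container creation if /etc/hostname is absent on the host,
	# instead of silently creating it as a directory.
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]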
	I0422 11:44:55.353868   46587 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0422 11:44:55.353876   46587 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0422 11:44:55.353884   46587 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0422 11:44:55.353889   46587 command_runner.go:130] > # Each entry in the table should follow the format:
	I0422 11:44:55.353894   46587 command_runner.go:130] > #
	I0422 11:44:55.353899   46587 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0422 11:44:55.353907   46587 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0422 11:44:55.353929   46587 command_runner.go:130] > # runtime_type = "oci"
	I0422 11:44:55.353936   46587 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0422 11:44:55.353940   46587 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0422 11:44:55.353946   46587 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0422 11:44:55.353951   46587 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0422 11:44:55.353957   46587 command_runner.go:130] > # monitor_env = []
	I0422 11:44:55.353962   46587 command_runner.go:130] > # privileged_without_host_devices = false
	I0422 11:44:55.353968   46587 command_runner.go:130] > # allowed_annotations = []
	I0422 11:44:55.353975   46587 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0422 11:44:55.353981   46587 command_runner.go:130] > # Where:
	I0422 11:44:55.353986   46587 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0422 11:44:55.353994   46587 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0422 11:44:55.354002   46587 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0422 11:44:55.354011   46587 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0422 11:44:55.354016   46587 command_runner.go:130] > #   in $PATH.
	I0422 11:44:55.354022   46587 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0422 11:44:55.354029   46587 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0422 11:44:55.354037   46587 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0422 11:44:55.354043   46587 command_runner.go:130] > #   state.
	I0422 11:44:55.354050   46587 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0422 11:44:55.354057   46587 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0422 11:44:55.354063   46587 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0422 11:44:55.354070   46587 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0422 11:44:55.354076   46587 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0422 11:44:55.354084   46587 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0422 11:44:55.354091   46587 command_runner.go:130] > #   The currently recognized values are:
	I0422 11:44:55.354099   46587 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0422 11:44:55.354107   46587 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0422 11:44:55.354115   46587 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0422 11:44:55.354121   46587 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0422 11:44:55.354131   46587 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0422 11:44:55.354139   46587 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0422 11:44:55.354147   46587 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0422 11:44:55.354157   46587 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0422 11:44:55.354165   46587 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0422 11:44:55.354172   46587 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0422 11:44:55.354177   46587 command_runner.go:130] > #   deprecated option "conmon".
	I0422 11:44:55.354186   46587 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0422 11:44:55.354192   46587 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0422 11:44:55.354201   46587 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0422 11:44:55.354208   46587 command_runner.go:130] > #   should be moved to the container's cgroup
	I0422 11:44:55.354215   46587 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0422 11:44:55.354222   46587 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0422 11:44:55.354229   46587 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0422 11:44:55.354237   46587 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0422 11:44:55.354242   46587 command_runner.go:130] > #
	I0422 11:44:55.354246   46587 command_runner.go:130] > # Using the seccomp notifier feature:
	I0422 11:44:55.354252   46587 command_runner.go:130] > #
	I0422 11:44:55.354257   46587 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0422 11:44:55.354265   46587 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0422 11:44:55.354271   46587 command_runner.go:130] > #
	I0422 11:44:55.354279   46587 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0422 11:44:55.354287   46587 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0422 11:44:55.354293   46587 command_runner.go:130] > #
	I0422 11:44:55.354299   46587 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0422 11:44:55.354304   46587 command_runner.go:130] > # feature.
	I0422 11:44:55.354307   46587 command_runner.go:130] > #
	I0422 11:44:55.354312   46587 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0422 11:44:55.354320   46587 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0422 11:44:55.354326   46587 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0422 11:44:55.354335   46587 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0422 11:44:55.354341   46587 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0422 11:44:55.354346   46587 command_runner.go:130] > #
	I0422 11:44:55.354352   46587 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0422 11:44:55.354360   46587 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0422 11:44:55.354363   46587 command_runner.go:130] > #
	I0422 11:44:55.354370   46587 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0422 11:44:55.354377   46587 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0422 11:44:55.354383   46587 command_runner.go:130] > #
	I0422 11:44:55.354390   46587 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0422 11:44:55.354397   46587 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0422 11:44:55.354401   46587 command_runner.go:130] > # limitation.
	I0422 11:44:55.354406   46587 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0422 11:44:55.354410   46587 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0422 11:44:55.354416   46587 command_runner.go:130] > runtime_type = "oci"
	I0422 11:44:55.354420   46587 command_runner.go:130] > runtime_root = "/run/runc"
	I0422 11:44:55.354429   46587 command_runner.go:130] > runtime_config_path = ""
	I0422 11:44:55.354434   46587 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0422 11:44:55.354441   46587 command_runner.go:130] > monitor_cgroup = "pod"
	I0422 11:44:55.354445   46587 command_runner.go:130] > monitor_exec_cgroup = ""
	I0422 11:44:55.354452   46587 command_runner.go:130] > monitor_env = [
	I0422 11:44:55.354457   46587 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0422 11:44:55.354462   46587 command_runner.go:130] > ]
	I0422 11:44:55.354467   46587 command_runner.go:130] > privileged_without_host_devices = false
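	The runc entry above is the only handler defined in this run. As a hedged sketch of the table format documented earlier, an additional handler (here "crun"; the binary path and root directory are assumptions, not observed values) might be declared as:
	
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"   # assumed install location
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	# Only annotations listed here may be processed by this handler.
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
	]
	
	A pod would then select the handler through the CRI runtime handler mechanism, for example via a RuntimeClass whose handler name is "crun".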
	I0422 11:44:55.354476   46587 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0422 11:44:55.354483   46587 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0422 11:44:55.354488   46587 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0422 11:44:55.354498   46587 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0422 11:44:55.354507   46587 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0422 11:44:55.354515   46587 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0422 11:44:55.354528   46587 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0422 11:44:55.354538   46587 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0422 11:44:55.354546   46587 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0422 11:44:55.354554   46587 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0422 11:44:55.354560   46587 command_runner.go:130] > # Example:
	I0422 11:44:55.354564   46587 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0422 11:44:55.354571   46587 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0422 11:44:55.354576   46587 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0422 11:44:55.354581   46587 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0422 11:44:55.354587   46587 command_runner.go:130] > # cpuset = 0
	I0422 11:44:55.354591   46587 command_runner.go:130] > # cpushares = "0-1"
	I0422 11:44:55.354596   46587 command_runner.go:130] > # Where:
	I0422 11:44:55.354600   46587 command_runner.go:130] > # The workload name is workload-type.
	I0422 11:44:55.354609   46587 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0422 11:44:55.354616   46587 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0422 11:44:55.354624   46587 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0422 11:44:55.354631   46587 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0422 11:44:55.354639   46587 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
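	Reading the commented example above as actual (uncommented) TOML, a hedged sketch of a workload definition, using no values beyond those the comments themselves give, would be:
	
	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	# A pod opts in with the "io.crio/workload" annotation (key only, value ignored);
	# per-container overrides take the form io.crio.workload-type.<resource>/<ctrName> = "value".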
	I0422 11:44:55.354646   46587 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0422 11:44:55.354655   46587 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0422 11:44:55.354665   46587 command_runner.go:130] > # Default value is set to true
	I0422 11:44:55.354675   46587 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0422 11:44:55.354686   46587 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0422 11:44:55.354698   46587 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0422 11:44:55.354709   46587 command_runner.go:130] > # Default value is set to 'false'
	I0422 11:44:55.354720   46587 command_runner.go:130] > # disable_hostport_mapping = false
	I0422 11:44:55.354732   46587 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0422 11:44:55.354740   46587 command_runner.go:130] > #
	I0422 11:44:55.354751   46587 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0422 11:44:55.354762   46587 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0422 11:44:55.354774   46587 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0422 11:44:55.354787   46587 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0422 11:44:55.354796   46587 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0422 11:44:55.354800   46587 command_runner.go:130] > [crio.image]
	I0422 11:44:55.354809   46587 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0422 11:44:55.354816   46587 command_runner.go:130] > # default_transport = "docker://"
	I0422 11:44:55.354829   46587 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0422 11:44:55.354837   46587 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0422 11:44:55.354841   46587 command_runner.go:130] > # global_auth_file = ""
	I0422 11:44:55.354846   46587 command_runner.go:130] > # The image used to instantiate infra containers.
	I0422 11:44:55.354850   46587 command_runner.go:130] > # This option supports live configuration reload.
	I0422 11:44:55.354857   46587 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0422 11:44:55.354863   46587 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0422 11:44:55.354868   46587 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0422 11:44:55.354873   46587 command_runner.go:130] > # This option supports live configuration reload.
	I0422 11:44:55.354877   46587 command_runner.go:130] > # pause_image_auth_file = ""
	I0422 11:44:55.354882   46587 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0422 11:44:55.354887   46587 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0422 11:44:55.354893   46587 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0422 11:44:55.354898   46587 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0422 11:44:55.354902   46587 command_runner.go:130] > # pause_command = "/pause"
	I0422 11:44:55.354907   46587 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0422 11:44:55.354912   46587 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0422 11:44:55.354918   46587 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0422 11:44:55.354923   46587 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0422 11:44:55.354929   46587 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0422 11:44:55.354934   46587 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0422 11:44:55.354938   46587 command_runner.go:130] > # pinned_images = [
	I0422 11:44:55.354941   46587 command_runner.go:130] > # ]
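	Since the pause image configured above is registry.k8s.io/pause:3.9, a hedged example of pinning it explicitly so the kubelet's garbage collector never removes it would be:
	
	# Illustrative only: exact-match pin for the pause image named earlier in this config.
	pinned_images = [
		"registry.k8s.io/pause:3.9",
	]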
	I0422 11:44:55.354946   46587 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0422 11:44:55.354953   46587 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0422 11:44:55.354958   46587 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0422 11:44:55.354964   46587 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0422 11:44:55.354972   46587 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0422 11:44:55.354976   46587 command_runner.go:130] > # signature_policy = ""
	I0422 11:44:55.354984   46587 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0422 11:44:55.354990   46587 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0422 11:44:55.354998   46587 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0422 11:44:55.355005   46587 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0422 11:44:55.355012   46587 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0422 11:44:55.355019   46587 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0422 11:44:55.355025   46587 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0422 11:44:55.355034   46587 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0422 11:44:55.355041   46587 command_runner.go:130] > # changing them here.
	I0422 11:44:55.355045   46587 command_runner.go:130] > # insecure_registries = [
	I0422 11:44:55.355050   46587 command_runner.go:130] > # ]
	I0422 11:44:55.355056   46587 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0422 11:44:55.355064   46587 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0422 11:44:55.355070   46587 command_runner.go:130] > # image_volumes = "mkdir"
	I0422 11:44:55.355076   46587 command_runner.go:130] > # Temporary directory to use for storing big files
	I0422 11:44:55.355082   46587 command_runner.go:130] > # big_files_temporary_dir = ""
	I0422 11:44:55.355088   46587 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0422 11:44:55.355093   46587 command_runner.go:130] > # CNI plugins.
	I0422 11:44:55.355098   46587 command_runner.go:130] > [crio.network]
	I0422 11:44:55.355105   46587 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0422 11:44:55.355110   46587 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0422 11:44:55.355117   46587 command_runner.go:130] > # cni_default_network = ""
	I0422 11:44:55.355122   46587 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0422 11:44:55.355128   46587 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0422 11:44:55.355133   46587 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0422 11:44:55.355139   46587 command_runner.go:130] > # plugin_dirs = [
	I0422 11:44:55.355143   46587 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0422 11:44:55.355149   46587 command_runner.go:130] > # ]
	I0422 11:44:55.355155   46587 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0422 11:44:55.355161   46587 command_runner.go:130] > [crio.metrics]
	I0422 11:44:55.355166   46587 command_runner.go:130] > # Globally enable or disable metrics support.
	I0422 11:44:55.355173   46587 command_runner.go:130] > enable_metrics = true
	I0422 11:44:55.355177   46587 command_runner.go:130] > # Specify enabled metrics collectors.
	I0422 11:44:55.355184   46587 command_runner.go:130] > # Per default all metrics are enabled.
	I0422 11:44:55.355189   46587 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0422 11:44:55.355198   46587 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0422 11:44:55.355205   46587 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0422 11:44:55.355210   46587 command_runner.go:130] > # metrics_collectors = [
	I0422 11:44:55.355214   46587 command_runner.go:130] > # 	"operations",
	I0422 11:44:55.355221   46587 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0422 11:44:55.355225   46587 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0422 11:44:55.355231   46587 command_runner.go:130] > # 	"operations_errors",
	I0422 11:44:55.355235   46587 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0422 11:44:55.355242   46587 command_runner.go:130] > # 	"image_pulls_by_name",
	I0422 11:44:55.355247   46587 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0422 11:44:55.355251   46587 command_runner.go:130] > # 	"image_pulls_failures",
	I0422 11:44:55.355255   46587 command_runner.go:130] > # 	"image_pulls_successes",
	I0422 11:44:55.355259   46587 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0422 11:44:55.355263   46587 command_runner.go:130] > # 	"image_layer_reuse",
	I0422 11:44:55.355270   46587 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0422 11:44:55.355276   46587 command_runner.go:130] > # 	"containers_oom_total",
	I0422 11:44:55.355283   46587 command_runner.go:130] > # 	"containers_oom",
	I0422 11:44:55.355287   46587 command_runner.go:130] > # 	"processes_defunct",
	I0422 11:44:55.355293   46587 command_runner.go:130] > # 	"operations_total",
	I0422 11:44:55.355297   46587 command_runner.go:130] > # 	"operations_latency_seconds",
	I0422 11:44:55.355304   46587 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0422 11:44:55.355308   46587 command_runner.go:130] > # 	"operations_errors_total",
	I0422 11:44:55.355315   46587 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0422 11:44:55.355319   46587 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0422 11:44:55.355325   46587 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0422 11:44:55.355329   46587 command_runner.go:130] > # 	"image_pulls_success_total",
	I0422 11:44:55.355333   46587 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0422 11:44:55.355337   46587 command_runner.go:130] > # 	"containers_oom_count_total",
	I0422 11:44:55.355344   46587 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0422 11:44:55.355348   46587 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0422 11:44:55.355354   46587 command_runner.go:130] > # ]
	I0422 11:44:55.355358   46587 command_runner.go:130] > # The port on which the metrics server will listen.
	I0422 11:44:55.355365   46587 command_runner.go:130] > # metrics_port = 9090
	I0422 11:44:55.355370   46587 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0422 11:44:55.355376   46587 command_runner.go:130] > # metrics_socket = ""
	I0422 11:44:55.355380   46587 command_runner.go:130] > # The certificate for the secure metrics server.
	I0422 11:44:55.355388   46587 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0422 11:44:55.355395   46587 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0422 11:44:55.355401   46587 command_runner.go:130] > # certificate on any modification event.
	I0422 11:44:55.355405   46587 command_runner.go:130] > # metrics_cert = ""
	I0422 11:44:55.355410   46587 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0422 11:44:55.355417   46587 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0422 11:44:55.355421   46587 command_runner.go:130] > # metrics_key = ""
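	This run keeps enable_metrics = true with all collectors at their defaults. A hedged sketch of narrowing collection to a handful of the collectors listed above, on the default port, might read:
	
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	# Collector names taken from the list above; anything not listed is disabled.
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]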
	I0422 11:44:55.355433   46587 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0422 11:44:55.355439   46587 command_runner.go:130] > [crio.tracing]
	I0422 11:44:55.355444   46587 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0422 11:44:55.355451   46587 command_runner.go:130] > # enable_tracing = false
	I0422 11:44:55.355456   46587 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0422 11:44:55.355463   46587 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0422 11:44:55.355469   46587 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0422 11:44:55.355476   46587 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
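	Tracing is left at its default (disabled) in this run. As a hedged illustration using only the defaults shown above, plus flipping the enable switch and sampling every span, an exporting configuration could look like:
	
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	# 1000000 samples per million spans == sample everything, per the comment above.
	tracing_sampling_rate_per_million = 1000000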
	I0422 11:44:55.355480   46587 command_runner.go:130] > # CRI-O NRI configuration.
	I0422 11:44:55.355484   46587 command_runner.go:130] > [crio.nri]
	I0422 11:44:55.355488   46587 command_runner.go:130] > # Globally enable or disable NRI.
	I0422 11:44:55.355492   46587 command_runner.go:130] > # enable_nri = false
	I0422 11:44:55.355496   46587 command_runner.go:130] > # NRI socket to listen on.
	I0422 11:44:55.355503   46587 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0422 11:44:55.355507   46587 command_runner.go:130] > # NRI plugin directory to use.
	I0422 11:44:55.355513   46587 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0422 11:44:55.355518   46587 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0422 11:44:55.355527   46587 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0422 11:44:55.355534   46587 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0422 11:44:55.355538   46587 command_runner.go:130] > # nri_disable_connections = false
	I0422 11:44:55.355545   46587 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0422 11:44:55.355549   46587 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0422 11:44:55.355554   46587 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0422 11:44:55.355561   46587 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
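	NRI is likewise left at its defaults here. A hedged sketch that simply enables it while keeping the default socket and plugin paths shown above:
	
	[crio.nri]
	enable_nri = true
	nri_listen = "/var/run/nri/nri.sock"
	nri_plugin_dir = "/opt/nri/plugins"
	nri_plugin_registration_timeout = "5s"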
	I0422 11:44:55.355567   46587 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0422 11:44:55.355571   46587 command_runner.go:130] > [crio.stats]
	I0422 11:44:55.355579   46587 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0422 11:44:55.355584   46587 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0422 11:44:55.355590   46587 command_runner.go:130] > # stats_collection_period = 0
	I0422 11:44:55.355611   46587 command_runner.go:130] ! time="2024-04-22 11:44:55.323156157Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0422 11:44:55.355624   46587 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0422 11:44:55.355749   46587 cni.go:84] Creating CNI manager for ""
	I0422 11:44:55.355765   46587 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0422 11:44:55.355775   46587 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 11:44:55.355794   46587 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-254635 NodeName:multinode-254635 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 11:44:55.355917   46587 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-254635"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 11:44:55.355973   46587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 11:44:55.367279   46587 command_runner.go:130] > kubeadm
	I0422 11:44:55.367298   46587 command_runner.go:130] > kubectl
	I0422 11:44:55.367304   46587 command_runner.go:130] > kubelet
	I0422 11:44:55.367327   46587 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 11:44:55.367381   46587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 11:44:55.377316   46587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0422 11:44:55.397162   46587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 11:44:55.415642   46587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0422 11:44:55.435575   46587 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0422 11:44:55.440222   46587 command_runner.go:130] > 192.168.39.185	control-plane.minikube.internal
	I0422 11:44:55.440383   46587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 11:44:55.588534   46587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 11:44:55.606404   46587 certs.go:68] Setting up /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635 for IP: 192.168.39.185
	I0422 11:44:55.606425   46587 certs.go:194] generating shared ca certs ...
	I0422 11:44:55.606446   46587 certs.go:226] acquiring lock for ca certs: {Name:mk0b77082b88c771d0b00be5267ca31dfee6f85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 11:44:55.606589   46587 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key
	I0422 11:44:55.606648   46587 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key
	I0422 11:44:55.606661   46587 certs.go:256] generating profile certs ...
	I0422 11:44:55.606748   46587 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/client.key
	I0422 11:44:55.606833   46587 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/apiserver.key.8cc66a77
	I0422 11:44:55.606885   46587 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/proxy-client.key
	I0422 11:44:55.606902   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 11:44:55.606924   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 11:44:55.606943   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 11:44:55.606958   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 11:44:55.606976   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 11:44:55.606993   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 11:44:55.607012   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 11:44:55.607028   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 11:44:55.607098   46587 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem (1338 bytes)
	W0422 11:44:55.607137   46587 certs.go:480] ignoring /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945_empty.pem, impossibly tiny 0 bytes
	I0422 11:44:55.607151   46587 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem (1679 bytes)
	I0422 11:44:55.607185   46587 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem (1078 bytes)
	I0422 11:44:55.607232   46587 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem (1123 bytes)
	I0422 11:44:55.607273   46587 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem (1679 bytes)
	I0422 11:44:55.607328   46587 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem (1708 bytes)
	I0422 11:44:55.607366   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:44:55.607386   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem -> /usr/share/ca-certificates/14945.pem
	I0422 11:44:55.607406   46587 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> /usr/share/ca-certificates/149452.pem
	I0422 11:44:55.607954   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 11:44:55.633711   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 11:44:55.661079   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 11:44:55.687892   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 11:44:55.715749   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0422 11:44:55.743775   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 11:44:55.771803   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 11:44:55.799297   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/multinode-254635/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 11:44:55.826745   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 11:44:55.854629   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem --> /usr/share/ca-certificates/14945.pem (1338 bytes)
	I0422 11:44:55.882652   46587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /usr/share/ca-certificates/149452.pem (1708 bytes)
	I0422 11:44:55.910047   46587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 11:44:55.930007   46587 ssh_runner.go:195] Run: openssl version
	I0422 11:44:55.937116   46587 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0422 11:44:55.937198   46587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 11:44:55.949184   46587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:44:55.954172   46587 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:44:55.954373   46587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:44:55.954429   46587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 11:44:55.960525   46587 command_runner.go:130] > b5213941
	I0422 11:44:55.960608   46587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 11:44:55.970796   46587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14945.pem && ln -fs /usr/share/ca-certificates/14945.pem /etc/ssl/certs/14945.pem"
	I0422 11:44:55.982452   46587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14945.pem
	I0422 11:44:55.987389   46587 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 22 10:51 /usr/share/ca-certificates/14945.pem
	I0422 11:44:55.987482   46587 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 10:51 /usr/share/ca-certificates/14945.pem
	I0422 11:44:55.987533   46587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14945.pem
	I0422 11:44:55.993542   46587 command_runner.go:130] > 51391683
	I0422 11:44:55.993799   46587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14945.pem /etc/ssl/certs/51391683.0"
	I0422 11:44:56.003701   46587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149452.pem && ln -fs /usr/share/ca-certificates/149452.pem /etc/ssl/certs/149452.pem"
	I0422 11:44:56.015412   46587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149452.pem
	I0422 11:44:56.020345   46587 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 22 10:51 /usr/share/ca-certificates/149452.pem
	I0422 11:44:56.020464   46587 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 10:51 /usr/share/ca-certificates/149452.pem
	I0422 11:44:56.020515   46587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149452.pem
	I0422 11:44:56.027065   46587 command_runner.go:130] > 3ec20f2e
	I0422 11:44:56.027217   46587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149452.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 11:44:56.037205   46587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 11:44:56.042671   46587 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 11:44:56.042699   46587 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0422 11:44:56.042723   46587 command_runner.go:130] > Device: 253,1	Inode: 6292502     Links: 1
	I0422 11:44:56.042739   46587 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0422 11:44:56.042749   46587 command_runner.go:130] > Access: 2024-04-22 11:38:03.510650914 +0000
	I0422 11:44:56.042757   46587 command_runner.go:130] > Modify: 2024-04-22 11:38:03.510650914 +0000
	I0422 11:44:56.042769   46587 command_runner.go:130] > Change: 2024-04-22 11:38:03.510650914 +0000
	I0422 11:44:56.042777   46587 command_runner.go:130] >  Birth: 2024-04-22 11:38:03.510650914 +0000
	I0422 11:44:56.042845   46587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 11:44:56.049708   46587 command_runner.go:130] > Certificate will not expire
	I0422 11:44:56.049918   46587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 11:44:56.056172   46587 command_runner.go:130] > Certificate will not expire
	I0422 11:44:56.056237   46587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 11:44:56.062119   46587 command_runner.go:130] > Certificate will not expire
	I0422 11:44:56.062320   46587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 11:44:56.068483   46587 command_runner.go:130] > Certificate will not expire
	I0422 11:44:56.068536   46587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 11:44:56.074648   46587 command_runner.go:130] > Certificate will not expire
	I0422 11:44:56.074762   46587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 11:44:56.080612   46587 command_runner.go:130] > Certificate will not expire
	I0422 11:44:56.080984   46587 kubeadm.go:391] StartCluster: {Name:multinode-254635 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-254635
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:f
alse istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:44:56.081129   46587 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 11:44:56.081186   46587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 11:44:56.120475   46587 command_runner.go:130] > 11c87675d112df1f4e12a819757b733862cca0a7eccb55f1d72d483e254ce402
	I0422 11:44:56.120503   46587 command_runner.go:130] > c5ac398b3838a4544e429afdc4ec699c532240a711df54d0ff54f626894fd3c3
	I0422 11:44:56.120512   46587 command_runner.go:130] > 70ea62bce313978f143670020dbcaed41edb5279e812840d18fa210fbf68433d
	I0422 11:44:56.120521   46587 command_runner.go:130] > 8bd2ac5a2adfeb536099f59cf363bbbde81f2e3983e1d6c18a1f6565651e8ed9
	I0422 11:44:56.120529   46587 command_runner.go:130] > 07a0b4812dd3b1c6a7f0c82617d26c0ccc45b8b8e1d30d6c318f8bda12735f0b
	I0422 11:44:56.120538   46587 command_runner.go:130] > d3b8493457784233fb659b95632fa92367fa72fc86b760ee436e0ad6468bd664
	I0422 11:44:56.120550   46587 command_runner.go:130] > d66e29130d9c973c5174eff5a88cb844d52b9fa38ad6333085bb66c3bd155697
	I0422 11:44:56.120565   46587 command_runner.go:130] > 7c0d3bf49be403d6755298c16ec74a2883dc6b3e3c8efde6968515c7bc280b9c
	I0422 11:44:56.120589   46587 cri.go:89] found id: "11c87675d112df1f4e12a819757b733862cca0a7eccb55f1d72d483e254ce402"
	I0422 11:44:56.120599   46587 cri.go:89] found id: "c5ac398b3838a4544e429afdc4ec699c532240a711df54d0ff54f626894fd3c3"
	I0422 11:44:56.120602   46587 cri.go:89] found id: "70ea62bce313978f143670020dbcaed41edb5279e812840d18fa210fbf68433d"
	I0422 11:44:56.120605   46587 cri.go:89] found id: "8bd2ac5a2adfeb536099f59cf363bbbde81f2e3983e1d6c18a1f6565651e8ed9"
	I0422 11:44:56.120613   46587 cri.go:89] found id: "07a0b4812dd3b1c6a7f0c82617d26c0ccc45b8b8e1d30d6c318f8bda12735f0b"
	I0422 11:44:56.120616   46587 cri.go:89] found id: "d3b8493457784233fb659b95632fa92367fa72fc86b760ee436e0ad6468bd664"
	I0422 11:44:56.120619   46587 cri.go:89] found id: "d66e29130d9c973c5174eff5a88cb844d52b9fa38ad6333085bb66c3bd155697"
	I0422 11:44:56.120621   46587 cri.go:89] found id: "7c0d3bf49be403d6755298c16ec74a2883dc6b3e3c8efde6968515c7bc280b9c"
	I0422 11:44:56.120624   46587 cri.go:89] found id: ""
	I0422 11:44:56.120662   46587 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.827779601Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713786531827758220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=386378c9-c72e-45ed-ac2c-8532cd41b4f7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.829301038Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb40adcd-5e82-49b1-8954-44f0b5983c4d name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.829383938Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb40adcd-5e82-49b1-8954-44f0b5983c4d name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.829909969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4dd7f77a62427cc79a55da96472d737fa62edd2e32f2a59f9eeb95d7e3cee8b4,PodSandboxId:a4f9db39efb9d57360a91d996f7d4fca5f95b3b4b4a62ee2d15ff003863d5196,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713786336130816510,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w6wst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec3be7d9-b316-43ba-8c05-c028f530c07e,},Annotations:map[string]string{io.kubernetes.container.hash: d354c3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:170cc5dfa96c9013a5da4901c9998afe1e59779fda2e4d36d4697b12c1e7dc34,PodSandboxId:98087e18a7cbb0bd82bb75f40cd5ba1782fa267c12b76ca45bf0562e5a25ef38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713786302535530877,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jzhvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 848b349d-906a-411c-a60b-b559d47ad2a7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4ed2d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aee9691b01261c2c6b2edb4a38d63b27aa00f3c67567e1829e719016c997dc4,PodSandboxId:f9a67731119452cc2f7e5efd38ec79284dc18238337d6f5aabf1304b57ab67b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713786302438370128,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-858b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 457a81ab-ca6c-4757-92b1-734ba151216f,},Annotations:map[string]string{io.kubernetes.container.hash: 648abffd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0479f88c8f22fadfd7ca5c88a541baead1e410717b9d83c5f6e8c9c81026cd90,PodSandboxId:a042cf023bf4e218fa5f8b26e1a1b677b8163df69466c1d490db5763d8a85265,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713786302404859701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mr7rq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a91e327-1478-4e50-9993-de3d5406efaa,},Annotations:map[string]s
tring{io.kubernetes.container.hash: b4fca94f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9048b9918b26e868f7f5bb8d1b1b1f3370ac6cdc2da10608141d9b14c76858e3,PodSandboxId:1ab69924cd55ac55cfdb620fa2405404172a58fbf0cf79574173e4aa7793996e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713786302415868649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82216f3b-f366-4b55-893a-8f7c1b59372b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 42acbef6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab8f0adadfda9e02f9436c2cee58b7ec4d68640fc4514df80155477e55ffd7c,PodSandboxId:b376e76e22dd60e25bc96a0bc0f85f1612608a0606260ba99efdc85ce7e34bb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713786298615575062,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346d24e136b744e11f51aaf0b32cfabc,},Annotations:map[string]string{io.kubernetes.container.hash: ea3273f1,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4867fbb06694830cf22ead69bf0ddd10a883b530a624a5e9b3b78fa115b0bc2,PodSandboxId:c8783ce123c8f90199c1c8c7247f52091596152d1373914113027e71aa5ef328,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713786298590465244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7dbbfc94d550094389016edf0d994af,},Annotations:map[string]string{io.kubernetes.container.hash: 933c335
1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9631acc3cd8008ea734f6268932041e8e0e08d96b2532faa4be1d1e017eae954,PodSandboxId:fd54210addb9cd6bce92a9348095b27c8b805d8be5d54b8c974fc492fc55dc7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713786298489648163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739aac7b3eff66515aa3886c2a1e8a1f,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86609797440bf5cb0ebb23673aadaa0c528eea783ee8792fab8e9c928d17a31c,PodSandboxId:af59cf4622968ecd5d9b3cc728998c2bb506c8c387315d19b068a53faf38db31,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713786298509072759,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b4e82c7f0c63c79504c005bee34fab,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d119fbdf20b5c9a472fff8e3b1e684445daab02f1bbdcea33624195a806c4ad,PodSandboxId:a4e2b504f1ee11299039caae119b3822cb2845449805ab51d09c66c02f259520,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713785987940597002,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w6wst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec3be7d9-b316-43ba-8c05-c028f530c07e,},Annotations:map[string]string{io.kubernetes.container.hash: d354c3c,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c87675d112df1f4e12a819757b733862cca0a7eccb55f1d72d483e254ce402,PodSandboxId:68f35a30ef2888e4d8c3443bed165372125c99ec4c90ad00e5788f23829e37a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713785939563513704,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82216f3b-f366-4b55-893a-8f7c1b59372b,},Annotations:map[string]string{io.kubernetes.container.hash: 42acbef6,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5ac398b3838a4544e429afdc4ec699c532240a711df54d0ff54f626894fd3c3,PodSandboxId:424d2c496a3dea7ca547f4fac7ee1fedc8712d6349f046b99e0e60337a89ae4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713785939558527459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-858b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 457a81ab-ca6c-4757-92b1-734ba151216f,},Annotations:map[string]string{io.kubernetes.container.hash: 648abffd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ea62bce313978f143670020dbcaed41edb5279e812840d18fa210fbf68433d,PodSandboxId:f8b84bce701b57e793df270099e01b32c56bdae6c38be83e6b2821890bd56005,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713785908412292124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mr7rq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3a91e327-1478-4e50-9993-de3d5406efaa,},Annotations:map[string]string{io.kubernetes.container.hash: b4fca94f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd2ac5a2adfeb536099f59cf363bbbde81f2e3983e1d6c18a1f6565651e8ed9,PodSandboxId:5302e172d2439c6f7ab662cf2a92c5a8e3b4fdec4ca08af14bb0900e2b6db4db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713785907875455356,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jzhvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 848b349d-906a-411c-a60b-b
559d47ad2a7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4ed2d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a0b4812dd3b1c6a7f0c82617d26c0ccc45b8b8e1d30d6c318f8bda12735f0b,PodSandboxId:8eee2070b37b5e718fe58a882a7c2dd6170f42ad7a942a9c831405dea835c4b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713785887893260405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b4e82c7f0c63c79504c005bee34fab,},A
nnotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b8493457784233fb659b95632fa92367fa72fc86b760ee436e0ad6468bd664,PodSandboxId:dde06e3757bdcbf4a3df03f5af8d551716129cab83db3c61cd492162196df95f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713785887846208182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346d24e136b744e11f51aaf0b32cfabc,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: ea3273f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66e29130d9c973c5174eff5a88cb844d52b9fa38ad6333085bb66c3bd155697,PodSandboxId:174bda2a37e43f7a336de9740c6f681c2ffbc130efbd2576cd3af6aa0b68fdb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713785887816866063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7dbbfc94d550094389016edf0d994af,},Annotations:map[string]string{io.k
ubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0d3bf49be403d6755298c16ec74a2883dc6b3e3c8efde6968515c7bc280b9c,PodSandboxId:1cd9c0443939a46f232ffda1c018b1c89230a06ff670eadbe540f5e61af211f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713785887803677671,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739aac7b3eff66515aa3886c2a1e8a1f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 55596331,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb40adcd-5e82-49b1-8954-44f0b5983c4d name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.876112348Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b613a9f-773a-4673-a363-ede44e058ecb name=/runtime.v1.RuntimeService/Version
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.876325617Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b613a9f-773a-4673-a363-ede44e058ecb name=/runtime.v1.RuntimeService/Version
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.877616693Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37ca16d4-18f9-4d67-afb6-b1ac2923eed4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.878168478Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713786531878144798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37ca16d4-18f9-4d67-afb6-b1ac2923eed4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.878880642Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52e3d95d-a4f3-4927-847e-9d88b5c03e11 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.878971027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52e3d95d-a4f3-4927-847e-9d88b5c03e11 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.879348215Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4dd7f77a62427cc79a55da96472d737fa62edd2e32f2a59f9eeb95d7e3cee8b4,PodSandboxId:a4f9db39efb9d57360a91d996f7d4fca5f95b3b4b4a62ee2d15ff003863d5196,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713786336130816510,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w6wst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec3be7d9-b316-43ba-8c05-c028f530c07e,},Annotations:map[string]string{io.kubernetes.container.hash: d354c3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:170cc5dfa96c9013a5da4901c9998afe1e59779fda2e4d36d4697b12c1e7dc34,PodSandboxId:98087e18a7cbb0bd82bb75f40cd5ba1782fa267c12b76ca45bf0562e5a25ef38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713786302535530877,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jzhvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 848b349d-906a-411c-a60b-b559d47ad2a7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4ed2d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aee9691b01261c2c6b2edb4a38d63b27aa00f3c67567e1829e719016c997dc4,PodSandboxId:f9a67731119452cc2f7e5efd38ec79284dc18238337d6f5aabf1304b57ab67b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713786302438370128,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-858b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 457a81ab-ca6c-4757-92b1-734ba151216f,},Annotations:map[string]string{io.kubernetes.container.hash: 648abffd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0479f88c8f22fadfd7ca5c88a541baead1e410717b9d83c5f6e8c9c81026cd90,PodSandboxId:a042cf023bf4e218fa5f8b26e1a1b677b8163df69466c1d490db5763d8a85265,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713786302404859701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mr7rq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a91e327-1478-4e50-9993-de3d5406efaa,},Annotations:map[string]s
tring{io.kubernetes.container.hash: b4fca94f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9048b9918b26e868f7f5bb8d1b1b1f3370ac6cdc2da10608141d9b14c76858e3,PodSandboxId:1ab69924cd55ac55cfdb620fa2405404172a58fbf0cf79574173e4aa7793996e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713786302415868649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82216f3b-f366-4b55-893a-8f7c1b59372b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 42acbef6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab8f0adadfda9e02f9436c2cee58b7ec4d68640fc4514df80155477e55ffd7c,PodSandboxId:b376e76e22dd60e25bc96a0bc0f85f1612608a0606260ba99efdc85ce7e34bb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713786298615575062,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346d24e136b744e11f51aaf0b32cfabc,},Annotations:map[string]string{io.kubernetes.container.hash: ea3273f1,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4867fbb06694830cf22ead69bf0ddd10a883b530a624a5e9b3b78fa115b0bc2,PodSandboxId:c8783ce123c8f90199c1c8c7247f52091596152d1373914113027e71aa5ef328,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713786298590465244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7dbbfc94d550094389016edf0d994af,},Annotations:map[string]string{io.kubernetes.container.hash: 933c335
1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9631acc3cd8008ea734f6268932041e8e0e08d96b2532faa4be1d1e017eae954,PodSandboxId:fd54210addb9cd6bce92a9348095b27c8b805d8be5d54b8c974fc492fc55dc7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713786298489648163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739aac7b3eff66515aa3886c2a1e8a1f,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86609797440bf5cb0ebb23673aadaa0c528eea783ee8792fab8e9c928d17a31c,PodSandboxId:af59cf4622968ecd5d9b3cc728998c2bb506c8c387315d19b068a53faf38db31,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713786298509072759,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b4e82c7f0c63c79504c005bee34fab,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d119fbdf20b5c9a472fff8e3b1e684445daab02f1bbdcea33624195a806c4ad,PodSandboxId:a4e2b504f1ee11299039caae119b3822cb2845449805ab51d09c66c02f259520,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713785987940597002,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w6wst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec3be7d9-b316-43ba-8c05-c028f530c07e,},Annotations:map[string]string{io.kubernetes.container.hash: d354c3c,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c87675d112df1f4e12a819757b733862cca0a7eccb55f1d72d483e254ce402,PodSandboxId:68f35a30ef2888e4d8c3443bed165372125c99ec4c90ad00e5788f23829e37a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713785939563513704,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82216f3b-f366-4b55-893a-8f7c1b59372b,},Annotations:map[string]string{io.kubernetes.container.hash: 42acbef6,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5ac398b3838a4544e429afdc4ec699c532240a711df54d0ff54f626894fd3c3,PodSandboxId:424d2c496a3dea7ca547f4fac7ee1fedc8712d6349f046b99e0e60337a89ae4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713785939558527459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-858b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 457a81ab-ca6c-4757-92b1-734ba151216f,},Annotations:map[string]string{io.kubernetes.container.hash: 648abffd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ea62bce313978f143670020dbcaed41edb5279e812840d18fa210fbf68433d,PodSandboxId:f8b84bce701b57e793df270099e01b32c56bdae6c38be83e6b2821890bd56005,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713785908412292124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mr7rq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3a91e327-1478-4e50-9993-de3d5406efaa,},Annotations:map[string]string{io.kubernetes.container.hash: b4fca94f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd2ac5a2adfeb536099f59cf363bbbde81f2e3983e1d6c18a1f6565651e8ed9,PodSandboxId:5302e172d2439c6f7ab662cf2a92c5a8e3b4fdec4ca08af14bb0900e2b6db4db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713785907875455356,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jzhvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 848b349d-906a-411c-a60b-b
559d47ad2a7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4ed2d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a0b4812dd3b1c6a7f0c82617d26c0ccc45b8b8e1d30d6c318f8bda12735f0b,PodSandboxId:8eee2070b37b5e718fe58a882a7c2dd6170f42ad7a942a9c831405dea835c4b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713785887893260405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b4e82c7f0c63c79504c005bee34fab,},A
nnotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b8493457784233fb659b95632fa92367fa72fc86b760ee436e0ad6468bd664,PodSandboxId:dde06e3757bdcbf4a3df03f5af8d551716129cab83db3c61cd492162196df95f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713785887846208182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346d24e136b744e11f51aaf0b32cfabc,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: ea3273f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66e29130d9c973c5174eff5a88cb844d52b9fa38ad6333085bb66c3bd155697,PodSandboxId:174bda2a37e43f7a336de9740c6f681c2ffbc130efbd2576cd3af6aa0b68fdb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713785887816866063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7dbbfc94d550094389016edf0d994af,},Annotations:map[string]string{io.k
ubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0d3bf49be403d6755298c16ec74a2883dc6b3e3c8efde6968515c7bc280b9c,PodSandboxId:1cd9c0443939a46f232ffda1c018b1c89230a06ff670eadbe540f5e61af211f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713785887803677671,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739aac7b3eff66515aa3886c2a1e8a1f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 55596331,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=52e3d95d-a4f3-4927-847e-9d88b5c03e11 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.924353496Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ffa9a5f-dcb7-4972-a42c-8213e270e883 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.924451201Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ffa9a5f-dcb7-4972-a42c-8213e270e883 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.926106107Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9db07ac8-03ba-4044-83f1-86571abd084b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.926500653Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713786531926478424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9db07ac8-03ba-4044-83f1-86571abd084b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.927367514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19e01b72-2ea3-4679-997c-0e3daa9ec4ef name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.927455594Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19e01b72-2ea3-4679-997c-0e3daa9ec4ef name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.931002577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4dd7f77a62427cc79a55da96472d737fa62edd2e32f2a59f9eeb95d7e3cee8b4,PodSandboxId:a4f9db39efb9d57360a91d996f7d4fca5f95b3b4b4a62ee2d15ff003863d5196,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713786336130816510,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w6wst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec3be7d9-b316-43ba-8c05-c028f530c07e,},Annotations:map[string]string{io.kubernetes.container.hash: d354c3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:170cc5dfa96c9013a5da4901c9998afe1e59779fda2e4d36d4697b12c1e7dc34,PodSandboxId:98087e18a7cbb0bd82bb75f40cd5ba1782fa267c12b76ca45bf0562e5a25ef38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713786302535530877,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jzhvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 848b349d-906a-411c-a60b-b559d47ad2a7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4ed2d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aee9691b01261c2c6b2edb4a38d63b27aa00f3c67567e1829e719016c997dc4,PodSandboxId:f9a67731119452cc2f7e5efd38ec79284dc18238337d6f5aabf1304b57ab67b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713786302438370128,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-858b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 457a81ab-ca6c-4757-92b1-734ba151216f,},Annotations:map[string]string{io.kubernetes.container.hash: 648abffd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0479f88c8f22fadfd7ca5c88a541baead1e410717b9d83c5f6e8c9c81026cd90,PodSandboxId:a042cf023bf4e218fa5f8b26e1a1b677b8163df69466c1d490db5763d8a85265,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713786302404859701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mr7rq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a91e327-1478-4e50-9993-de3d5406efaa,},Annotations:map[string]s
tring{io.kubernetes.container.hash: b4fca94f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9048b9918b26e868f7f5bb8d1b1b1f3370ac6cdc2da10608141d9b14c76858e3,PodSandboxId:1ab69924cd55ac55cfdb620fa2405404172a58fbf0cf79574173e4aa7793996e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713786302415868649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82216f3b-f366-4b55-893a-8f7c1b59372b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 42acbef6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab8f0adadfda9e02f9436c2cee58b7ec4d68640fc4514df80155477e55ffd7c,PodSandboxId:b376e76e22dd60e25bc96a0bc0f85f1612608a0606260ba99efdc85ce7e34bb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713786298615575062,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346d24e136b744e11f51aaf0b32cfabc,},Annotations:map[string]string{io.kubernetes.container.hash: ea3273f1,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4867fbb06694830cf22ead69bf0ddd10a883b530a624a5e9b3b78fa115b0bc2,PodSandboxId:c8783ce123c8f90199c1c8c7247f52091596152d1373914113027e71aa5ef328,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713786298590465244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7dbbfc94d550094389016edf0d994af,},Annotations:map[string]string{io.kubernetes.container.hash: 933c335
1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9631acc3cd8008ea734f6268932041e8e0e08d96b2532faa4be1d1e017eae954,PodSandboxId:fd54210addb9cd6bce92a9348095b27c8b805d8be5d54b8c974fc492fc55dc7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713786298489648163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739aac7b3eff66515aa3886c2a1e8a1f,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86609797440bf5cb0ebb23673aadaa0c528eea783ee8792fab8e9c928d17a31c,PodSandboxId:af59cf4622968ecd5d9b3cc728998c2bb506c8c387315d19b068a53faf38db31,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713786298509072759,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b4e82c7f0c63c79504c005bee34fab,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d119fbdf20b5c9a472fff8e3b1e684445daab02f1bbdcea33624195a806c4ad,PodSandboxId:a4e2b504f1ee11299039caae119b3822cb2845449805ab51d09c66c02f259520,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713785987940597002,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w6wst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec3be7d9-b316-43ba-8c05-c028f530c07e,},Annotations:map[string]string{io.kubernetes.container.hash: d354c3c,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c87675d112df1f4e12a819757b733862cca0a7eccb55f1d72d483e254ce402,PodSandboxId:68f35a30ef2888e4d8c3443bed165372125c99ec4c90ad00e5788f23829e37a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713785939563513704,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82216f3b-f366-4b55-893a-8f7c1b59372b,},Annotations:map[string]string{io.kubernetes.container.hash: 42acbef6,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5ac398b3838a4544e429afdc4ec699c532240a711df54d0ff54f626894fd3c3,PodSandboxId:424d2c496a3dea7ca547f4fac7ee1fedc8712d6349f046b99e0e60337a89ae4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713785939558527459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-858b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 457a81ab-ca6c-4757-92b1-734ba151216f,},Annotations:map[string]string{io.kubernetes.container.hash: 648abffd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ea62bce313978f143670020dbcaed41edb5279e812840d18fa210fbf68433d,PodSandboxId:f8b84bce701b57e793df270099e01b32c56bdae6c38be83e6b2821890bd56005,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713785908412292124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mr7rq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3a91e327-1478-4e50-9993-de3d5406efaa,},Annotations:map[string]string{io.kubernetes.container.hash: b4fca94f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd2ac5a2adfeb536099f59cf363bbbde81f2e3983e1d6c18a1f6565651e8ed9,PodSandboxId:5302e172d2439c6f7ab662cf2a92c5a8e3b4fdec4ca08af14bb0900e2b6db4db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713785907875455356,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jzhvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 848b349d-906a-411c-a60b-b
559d47ad2a7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4ed2d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a0b4812dd3b1c6a7f0c82617d26c0ccc45b8b8e1d30d6c318f8bda12735f0b,PodSandboxId:8eee2070b37b5e718fe58a882a7c2dd6170f42ad7a942a9c831405dea835c4b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713785887893260405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b4e82c7f0c63c79504c005bee34fab,},A
nnotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b8493457784233fb659b95632fa92367fa72fc86b760ee436e0ad6468bd664,PodSandboxId:dde06e3757bdcbf4a3df03f5af8d551716129cab83db3c61cd492162196df95f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713785887846208182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346d24e136b744e11f51aaf0b32cfabc,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: ea3273f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66e29130d9c973c5174eff5a88cb844d52b9fa38ad6333085bb66c3bd155697,PodSandboxId:174bda2a37e43f7a336de9740c6f681c2ffbc130efbd2576cd3af6aa0b68fdb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713785887816866063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7dbbfc94d550094389016edf0d994af,},Annotations:map[string]string{io.k
ubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0d3bf49be403d6755298c16ec74a2883dc6b3e3c8efde6968515c7bc280b9c,PodSandboxId:1cd9c0443939a46f232ffda1c018b1c89230a06ff670eadbe540f5e61af211f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713785887803677671,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739aac7b3eff66515aa3886c2a1e8a1f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 55596331,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19e01b72-2ea3-4679-997c-0e3daa9ec4ef name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.979335539Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab3455d1-690b-4de8-93bc-0be285b2ceb2 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.979438733Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab3455d1-690b-4de8-93bc-0be285b2ceb2 name=/runtime.v1.RuntimeService/Version
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.981230777Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3ee5442-8789-41de-8485-7ade8737d6a4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.982012866Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713786531981984189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3ee5442-8789-41de-8485-7ade8737d6a4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.983206749Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17ce7b32-fb97-4802-99a5-a29e01bf7223 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.983288981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17ce7b32-fb97-4802-99a5-a29e01bf7223 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 11:48:51 multinode-254635 crio[2862]: time="2024-04-22 11:48:51.983613123Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4dd7f77a62427cc79a55da96472d737fa62edd2e32f2a59f9eeb95d7e3cee8b4,PodSandboxId:a4f9db39efb9d57360a91d996f7d4fca5f95b3b4b4a62ee2d15ff003863d5196,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713786336130816510,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w6wst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec3be7d9-b316-43ba-8c05-c028f530c07e,},Annotations:map[string]string{io.kubernetes.container.hash: d354c3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:170cc5dfa96c9013a5da4901c9998afe1e59779fda2e4d36d4697b12c1e7dc34,PodSandboxId:98087e18a7cbb0bd82bb75f40cd5ba1782fa267c12b76ca45bf0562e5a25ef38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713786302535530877,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jzhvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 848b349d-906a-411c-a60b-b559d47ad2a7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4ed2d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9aee9691b01261c2c6b2edb4a38d63b27aa00f3c67567e1829e719016c997dc4,PodSandboxId:f9a67731119452cc2f7e5efd38ec79284dc18238337d6f5aabf1304b57ab67b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713786302438370128,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-858b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 457a81ab-ca6c-4757-92b1-734ba151216f,},Annotations:map[string]string{io.kubernetes.container.hash: 648abffd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0479f88c8f22fadfd7ca5c88a541baead1e410717b9d83c5f6e8c9c81026cd90,PodSandboxId:a042cf023bf4e218fa5f8b26e1a1b677b8163df69466c1d490db5763d8a85265,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713786302404859701,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mr7rq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a91e327-1478-4e50-9993-de3d5406efaa,},Annotations:map[string]s
tring{io.kubernetes.container.hash: b4fca94f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9048b9918b26e868f7f5bb8d1b1b1f3370ac6cdc2da10608141d9b14c76858e3,PodSandboxId:1ab69924cd55ac55cfdb620fa2405404172a58fbf0cf79574173e4aa7793996e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713786302415868649,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82216f3b-f366-4b55-893a-8f7c1b59372b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 42acbef6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab8f0adadfda9e02f9436c2cee58b7ec4d68640fc4514df80155477e55ffd7c,PodSandboxId:b376e76e22dd60e25bc96a0bc0f85f1612608a0606260ba99efdc85ce7e34bb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713786298615575062,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346d24e136b744e11f51aaf0b32cfabc,},Annotations:map[string]string{io.kubernetes.container.hash: ea3273f1,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4867fbb06694830cf22ead69bf0ddd10a883b530a624a5e9b3b78fa115b0bc2,PodSandboxId:c8783ce123c8f90199c1c8c7247f52091596152d1373914113027e71aa5ef328,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713786298590465244,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7dbbfc94d550094389016edf0d994af,},Annotations:map[string]string{io.kubernetes.container.hash: 933c335
1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9631acc3cd8008ea734f6268932041e8e0e08d96b2532faa4be1d1e017eae954,PodSandboxId:fd54210addb9cd6bce92a9348095b27c8b805d8be5d54b8c974fc492fc55dc7f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713786298489648163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739aac7b3eff66515aa3886c2a1e8a1f,},Annotations:map[string]string{io.kubernetes.container.hash: 55596331,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86609797440bf5cb0ebb23673aadaa0c528eea783ee8792fab8e9c928d17a31c,PodSandboxId:af59cf4622968ecd5d9b3cc728998c2bb506c8c387315d19b068a53faf38db31,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713786298509072759,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b4e82c7f0c63c79504c005bee34fab,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d119fbdf20b5c9a472fff8e3b1e684445daab02f1bbdcea33624195a806c4ad,PodSandboxId:a4e2b504f1ee11299039caae119b3822cb2845449805ab51d09c66c02f259520,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713785987940597002,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-w6wst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec3be7d9-b316-43ba-8c05-c028f530c07e,},Annotations:map[string]string{io.kubernetes.container.hash: d354c3c,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c87675d112df1f4e12a819757b733862cca0a7eccb55f1d72d483e254ce402,PodSandboxId:68f35a30ef2888e4d8c3443bed165372125c99ec4c90ad00e5788f23829e37a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713785939563513704,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82216f3b-f366-4b55-893a-8f7c1b59372b,},Annotations:map[string]string{io.kubernetes.container.hash: 42acbef6,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5ac398b3838a4544e429afdc4ec699c532240a711df54d0ff54f626894fd3c3,PodSandboxId:424d2c496a3dea7ca547f4fac7ee1fedc8712d6349f046b99e0e60337a89ae4d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713785939558527459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-858b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 457a81ab-ca6c-4757-92b1-734ba151216f,},Annotations:map[string]string{io.kubernetes.container.hash: 648abffd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70ea62bce313978f143670020dbcaed41edb5279e812840d18fa210fbf68433d,PodSandboxId:f8b84bce701b57e793df270099e01b32c56bdae6c38be83e6b2821890bd56005,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713785908412292124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mr7rq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3a91e327-1478-4e50-9993-de3d5406efaa,},Annotations:map[string]string{io.kubernetes.container.hash: b4fca94f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd2ac5a2adfeb536099f59cf363bbbde81f2e3983e1d6c18a1f6565651e8ed9,PodSandboxId:5302e172d2439c6f7ab662cf2a92c5a8e3b4fdec4ca08af14bb0900e2b6db4db,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713785907875455356,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jzhvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 848b349d-906a-411c-a60b-b
559d47ad2a7,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4ed2d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a0b4812dd3b1c6a7f0c82617d26c0ccc45b8b8e1d30d6c318f8bda12735f0b,PodSandboxId:8eee2070b37b5e718fe58a882a7c2dd6170f42ad7a942a9c831405dea835c4b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713785887893260405,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b4e82c7f0c63c79504c005bee34fab,},A
nnotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b8493457784233fb659b95632fa92367fa72fc86b760ee436e0ad6468bd664,PodSandboxId:dde06e3757bdcbf4a3df03f5af8d551716129cab83db3c61cd492162196df95f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713785887846208182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 346d24e136b744e11f51aaf0b32cfabc,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: ea3273f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d66e29130d9c973c5174eff5a88cb844d52b9fa38ad6333085bb66c3bd155697,PodSandboxId:174bda2a37e43f7a336de9740c6f681c2ffbc130efbd2576cd3af6aa0b68fdb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713785887816866063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7dbbfc94d550094389016edf0d994af,},Annotations:map[string]string{io.k
ubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c0d3bf49be403d6755298c16ec74a2883dc6b3e3c8efde6968515c7bc280b9c,PodSandboxId:1cd9c0443939a46f232ffda1c018b1c89230a06ff670eadbe540f5e61af211f2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713785887803677671,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-254635,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 739aac7b3eff66515aa3886c2a1e8a1f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 55596331,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17ce7b32-fb97-4802-99a5-a29e01bf7223 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4dd7f77a62427       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   a4f9db39efb9d       busybox-fc5497c4f-w6wst
	170cc5dfa96c9       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   98087e18a7cbb       kindnet-jzhvl
	9aee9691b0126       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   f9a6773111945       coredns-7db6d8ff4d-858b8
	9048b9918b26e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   1ab69924cd55a       storage-provisioner
	0479f88c8f22f       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      3 minutes ago       Running             kube-proxy                1                   a042cf023bf4e       kube-proxy-mr7rq
	cab8f0adadfda       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   b376e76e22dd6       etcd-multinode-254635
	a4867fbb06694       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      3 minutes ago       Running             kube-controller-manager   1                   c8783ce123c8f       kube-controller-manager-multinode-254635
	86609797440bf       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      3 minutes ago       Running             kube-scheduler            1                   af59cf4622968       kube-scheduler-multinode-254635
	9631acc3cd800       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      3 minutes ago       Running             kube-apiserver            1                   fd54210addb9c       kube-apiserver-multinode-254635
	8d119fbdf20b5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   a4e2b504f1ee1       busybox-fc5497c4f-w6wst
	11c87675d112d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   68f35a30ef288       storage-provisioner
	c5ac398b3838a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   424d2c496a3de       coredns-7db6d8ff4d-858b8
	70ea62bce3139       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      10 minutes ago      Exited              kube-proxy                0                   f8b84bce701b5       kube-proxy-mr7rq
	8bd2ac5a2adfe       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      10 minutes ago      Exited              kindnet-cni               0                   5302e172d2439       kindnet-jzhvl
	07a0b4812dd3b       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      10 minutes ago      Exited              kube-scheduler            0                   8eee2070b37b5       kube-scheduler-multinode-254635
	d3b8493457784       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   dde06e3757bdc       etcd-multinode-254635
	d66e29130d9c9       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      10 minutes ago      Exited              kube-controller-manager   0                   174bda2a37e43       kube-controller-manager-multinode-254635
	7c0d3bf49be40       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      10 minutes ago      Exited              kube-apiserver            0                   1cd9c0443939a       kube-apiserver-multinode-254635
	
	
	==> coredns [9aee9691b01261c2c6b2edb4a38d63b27aa00f3c67567e1829e719016c997dc4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56072 - 32659 "HINFO IN 1266163236657129463.659094727874770730. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.029456983s
	
	
	==> coredns [c5ac398b3838a4544e429afdc4ec699c532240a711df54d0ff54f626894fd3c3] <==
	[INFO] 10.244.0.3:53477 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001911859s
	[INFO] 10.244.0.3:43110 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010615s
	[INFO] 10.244.0.3:57365 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000050085s
	[INFO] 10.244.0.3:36553 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001215941s
	[INFO] 10.244.0.3:48767 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000042748s
	[INFO] 10.244.0.3:50007 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000028162s
	[INFO] 10.244.0.3:41159 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031231s
	[INFO] 10.244.1.2:47904 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184337s
	[INFO] 10.244.1.2:44915 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012092s
	[INFO] 10.244.1.2:41093 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105438s
	[INFO] 10.244.1.2:47657 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105456s
	[INFO] 10.244.0.3:52223 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145371s
	[INFO] 10.244.0.3:41870 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000178913s
	[INFO] 10.244.0.3:41925 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071881s
	[INFO] 10.244.0.3:39621 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073491s
	[INFO] 10.244.1.2:57097 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139978s
	[INFO] 10.244.1.2:39532 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000227935s
	[INFO] 10.244.1.2:34609 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112344s
	[INFO] 10.244.1.2:33126 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000138362s
	[INFO] 10.244.0.3:60416 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117217s
	[INFO] 10.244.0.3:38982 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00007414s
	[INFO] 10.244.0.3:49474 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000066889s
	[INFO] 10.244.0.3:56944 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000064658s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-254635
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-254635
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=multinode-254635
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T11_38_14_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:38:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-254635
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:48:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 11:45:01 +0000   Mon, 22 Apr 2024 11:38:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 11:45:01 +0000   Mon, 22 Apr 2024 11:38:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 11:45:01 +0000   Mon, 22 Apr 2024 11:38:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 11:45:01 +0000   Mon, 22 Apr 2024 11:38:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    multinode-254635
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2f8c402073c446489e978f037232a51b
	  System UUID:                2f8c4020-73c4-4648-9e97-8f037232a51b
	  Boot ID:                    4a2171db-fa95-402b-8c19-b12ba2852d41
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-w6wst                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 coredns-7db6d8ff4d-858b8                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-254635                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-jzhvl                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-254635             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-254635    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-mr7rq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-254635             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)    100m (5%)
	  memory             220Mi (10%)   220Mi (10%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 3m49s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-254635 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-254635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-254635 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-254635 event: Registered Node multinode-254635 in Controller
	  Normal  NodeReady                9m53s                  kubelet          Node multinode-254635 status is now: NodeReady
	  Normal  Starting                 3m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m55s (x8 over 3m55s)  kubelet          Node multinode-254635 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m55s (x8 over 3m55s)  kubelet          Node multinode-254635 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m55s (x7 over 3m55s)  kubelet          Node multinode-254635 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m38s                  node-controller  Node multinode-254635 event: Registered Node multinode-254635 in Controller
	
	
	Name:               multinode-254635-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-254635-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=multinode-254635
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T11_45_45_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 11:45:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-254635-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 11:46:26 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 22 Apr 2024 11:46:16 +0000   Mon, 22 Apr 2024 11:47:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 22 Apr 2024 11:46:16 +0000   Mon, 22 Apr 2024 11:47:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 22 Apr 2024 11:46:16 +0000   Mon, 22 Apr 2024 11:47:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 22 Apr 2024 11:46:16 +0000   Mon, 22 Apr 2024 11:47:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    multinode-254635-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f3ae7ac65784a94b9f762b55c80c783
	  System UUID:                7f3ae7ac-6578-4a94-b9f7-62b55c80c783
	  Boot ID:                    288dcc2f-8b41-44e7-a49d-cd4a33ebeeeb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2cvd8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 kindnet-4jq8c              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m20s
	  kube-system                 kube-proxy-bkcdv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)   100m (5%)
	  memory             50Mi (2%)   50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m15s                  kube-proxy       
	  Normal  Starting                 3m2s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s (x2 over 9m20s)  kubelet          Node multinode-254635-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s (x2 over 9m20s)  kubelet          Node multinode-254635-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s (x2 over 9m20s)  kubelet          Node multinode-254635-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeReady                9m10s                  kubelet          Node multinode-254635-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m7s (x2 over 3m7s)    kubelet          Node multinode-254635-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x2 over 3m7s)    kubelet          Node multinode-254635-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x2 over 3m7s)    kubelet          Node multinode-254635-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m3s                   node-controller  Node multinode-254635-m02 event: Registered Node multinode-254635-m02 in Controller
	  Normal  NodeReady                2m58s                  kubelet          Node multinode-254635-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                   node-controller  Node multinode-254635-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.055900] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.175538] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.152802] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.305706] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[Apr22 11:38] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.060726] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.642952] systemd-fstab-generator[964]: Ignoring "noauto" option for root device
	[  +0.065049] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.011996] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +0.079352] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.227976] systemd-fstab-generator[1489]: Ignoring "noauto" option for root device
	[  +0.091992] kauditd_printk_skb: 21 callbacks suppressed
	[ +32.270710] kauditd_printk_skb: 60 callbacks suppressed
	[Apr22 11:39] kauditd_printk_skb: 12 callbacks suppressed
	[Apr22 11:44] systemd-fstab-generator[2782]: Ignoring "noauto" option for root device
	[  +0.154353] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.178213] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.138656] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.314062] systemd-fstab-generator[2848]: Ignoring "noauto" option for root device
	[  +0.766210] systemd-fstab-generator[2946]: Ignoring "noauto" option for root device
	[  +2.084725] systemd-fstab-generator[3070]: Ignoring "noauto" option for root device
	[Apr22 11:45] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.451316] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.947292] systemd-fstab-generator[3878]: Ignoring "noauto" option for root device
	[ +17.398344] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [cab8f0adadfda9e02f9436c2cee58b7ec4d68640fc4514df80155477e55ffd7c] <==
	{"level":"info","ts":"2024-04-22T11:44:59.057679Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T11:44:59.057739Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T11:44:59.057971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d switched to configuration voters=(10357203766055541037)"}
	{"level":"info","ts":"2024-04-22T11:44:59.058053Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e1b99ad77146789e","local-member-id":"8fbc2df34e14192d","added-peer-id":"8fbc2df34e14192d","added-peer-peer-urls":["https://192.168.39.185:2380"]}
	{"level":"info","ts":"2024-04-22T11:44:59.058192Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e1b99ad77146789e","local-member-id":"8fbc2df34e14192d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T11:44:59.058242Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T11:44:59.075662Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-22T11:44:59.077479Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8fbc2df34e14192d","initial-advertise-peer-urls":["https://192.168.39.185:2380"],"listen-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.185:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T11:44:59.077541Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-22T11:44:59.077283Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-04-22T11:44:59.077603Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-04-22T11:45:00.210464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-22T11:45:00.210546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-22T11:45:00.210605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgPreVoteResp from 8fbc2df34e14192d at term 2"}
	{"level":"info","ts":"2024-04-22T11:45:00.210629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became candidate at term 3"}
	{"level":"info","ts":"2024-04-22T11:45:00.210669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgVoteResp from 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2024-04-22T11:45:00.210778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became leader at term 3"}
	{"level":"info","ts":"2024-04-22T11:45:00.210795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8fbc2df34e14192d elected leader 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2024-04-22T11:45:00.221095Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T11:45:00.223304Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T11:45:00.221031Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8fbc2df34e14192d","local-member-attributes":"{Name:multinode-254635 ClientURLs:[https://192.168.39.185:2379]}","request-path":"/0/members/8fbc2df34e14192d/attributes","cluster-id":"e1b99ad77146789e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T11:45:00.227909Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T11:45:00.228105Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T11:45:00.228148Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T11:45:00.233426Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.185:2379"}
	
	
	==> etcd [d3b8493457784233fb659b95632fa92367fa72fc86b760ee436e0ad6468bd664] <==
	{"level":"warn","ts":"2024-04-22T11:40:25.520971Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"275.145009ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1814263479110859697 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/multinode-254635-m03\" mod_revision:598 > success:<request_put:<key:\"/registry/minions/multinode-254635-m03\" value_size:2068 >> failure:<request_range:<key:\"/registry/minions/multinode-254635-m03\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-22T11:40:25.523153Z","caller":"traceutil/trace.go:171","msg":"trace[105130172] transaction","detail":"{read_only:false; number_of_response:1; response_revision:600; }","duration":"523.873692ms","start":"2024-04-22T11:40:24.999263Z","end":"2024-04-22T11:40:25.523136Z","steps":["trace[105130172] 'process raft request'  (duration: 523.77523ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:40:25.523261Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T11:40:24.999254Z","time spent":"523.962586ms","remote":"127.0.0.1:45880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":42,"response count":0,"response size":2164,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-254635-m03\" mod_revision:598 > success:<request_put:<key:\"/registry/minions/multinode-254635-m03\" value_size:2036 >> failure:<request_range:<key:\"/registry/minions/multinode-254635-m03\" > >"}
	{"level":"info","ts":"2024-04-22T11:40:25.523191Z","caller":"traceutil/trace.go:171","msg":"trace[888699519] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"127.062437ms","start":"2024-04-22T11:40:25.396117Z","end":"2024-04-22T11:40:25.523179Z","steps":["trace[888699519] 'process raft request'  (duration: 127.003141ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T11:40:25.523422Z","caller":"traceutil/trace.go:171","msg":"trace[703166275] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"527.435318ms","start":"2024-04-22T11:40:24.995979Z","end":"2024-04-22T11:40:25.523414Z","steps":["trace[703166275] 'process raft request'  (duration: 249.508369ms)","trace[703166275] 'compare'  (duration: 274.936359ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-22T11:40:25.523496Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T11:40:24.99597Z","time spent":"527.501906ms","remote":"127.0.0.1:45880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2114,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-254635-m03\" mod_revision:598 > success:<request_put:<key:\"/registry/minions/multinode-254635-m03\" value_size:2068 >> failure:<request_range:<key:\"/registry/minions/multinode-254635-m03\" > >"}
	{"level":"info","ts":"2024-04-22T11:40:25.523661Z","caller":"traceutil/trace.go:171","msg":"trace[2071054875] linearizableReadLoop","detail":"{readStateIndex:642; appliedIndex:640; }","duration":"524.328412ms","start":"2024-04-22T11:40:24.999319Z","end":"2024-04-22T11:40:25.523647Z","steps":["trace[2071054875] 'read index received'  (duration: 48.541315ms)","trace[2071054875] 'applied index is now lower than readState.Index'  (duration: 475.786388ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-22T11:40:25.523833Z","caller":"traceutil/trace.go:171","msg":"trace[621235512] transaction","detail":"{read_only:false; number_of_response:1; response_revision:600; }","duration":"524.484699ms","start":"2024-04-22T11:40:24.999339Z","end":"2024-04-22T11:40:25.523824Z","steps":["trace[621235512] 'process raft request'  (duration: 523.751174ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:40:25.523875Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T11:40:24.999336Z","time spent":"524.513825ms","remote":"127.0.0.1:45880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":42,"response count":0,"response size":2164,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-254635-m03\" mod_revision:598 > success:<request_put:<key:\"/registry/minions/multinode-254635-m03\" value_size:2033 >> failure:<request_range:<key:\"/registry/minions/multinode-254635-m03\" > >"}
	{"level":"warn","ts":"2024-04-22T11:40:25.524031Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"524.703984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-22T11:40:25.52409Z","caller":"traceutil/trace.go:171","msg":"trace[327369696] range","detail":"{range_begin:/registry/limitranges/kube-system/; range_end:/registry/limitranges/kube-system0; response_count:0; response_revision:601; }","duration":"524.77683ms","start":"2024-04-22T11:40:24.9993Z","end":"2024-04-22T11:40:25.524077Z","steps":["trace[327369696] 'agreement among raft nodes before linearized reading'  (duration: 524.672554ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:40:25.524118Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T11:40:24.999292Z","time spent":"524.813771ms","remote":"127.0.0.1:45854","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":0,"response size":29,"request content":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" "}
	{"level":"warn","ts":"2024-04-22T11:40:25.524245Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"524.793893ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/kube-node-lease/\" range_end:\"/registry/resourcequotas/kube-node-lease0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-22T11:40:25.524289Z","caller":"traceutil/trace.go:171","msg":"trace[1580525120] range","detail":"{range_begin:/registry/resourcequotas/kube-node-lease/; range_end:/registry/resourcequotas/kube-node-lease0; response_count:0; response_revision:601; }","duration":"524.844997ms","start":"2024-04-22T11:40:24.999438Z","end":"2024-04-22T11:40:25.524283Z","steps":["trace[1580525120] 'agreement among raft nodes before linearized reading'  (duration: 524.790183ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T11:40:25.524309Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T11:40:24.999433Z","time spent":"524.86999ms","remote":"127.0.0.1:45804","response type":"/etcdserverpb.KV/Range","request count":0,"request size":86,"response count":0,"response size":29,"request content":"key:\"/registry/resourcequotas/kube-node-lease/\" range_end:\"/registry/resourcequotas/kube-node-lease0\" "}
	{"level":"info","ts":"2024-04-22T11:43:22.682608Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-22T11:43:22.688249Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-254635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"]}
	{"level":"warn","ts":"2024-04-22T11:43:22.688422Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T11:43:22.688665Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T11:43:22.759992Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.185:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T11:43:22.760056Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.185:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-22T11:43:22.761619Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8fbc2df34e14192d","current-leader-member-id":"8fbc2df34e14192d"}
	{"level":"info","ts":"2024-04-22T11:43:22.765147Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-04-22T11:43:22.76533Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-04-22T11:43:22.765374Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-254635","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"]}
	
	
	==> kernel <==
	 11:48:52 up 11 min,  0 users,  load average: 0.19, 0.27, 0.17
	Linux multinode-254635 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [170cc5dfa96c9013a5da4901c9998afe1e59779fda2e4d36d4697b12c1e7dc34] <==
	I0422 11:47:43.666500       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:47:53.674360       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:47:53.674409       1 main.go:227] handling current node
	I0422 11:47:53.674423       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:47:53.674429       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:48:03.680485       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:48:03.680536       1 main.go:227] handling current node
	I0422 11:48:03.680548       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:48:03.680651       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:48:13.685988       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:48:13.686031       1 main.go:227] handling current node
	I0422 11:48:13.686258       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:48:13.686293       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:48:23.692812       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:48:23.692975       1 main.go:227] handling current node
	I0422 11:48:23.693007       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:48:23.693030       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:48:33.708217       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:48:33.708436       1 main.go:227] handling current node
	I0422 11:48:33.708487       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:48:33.708506       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:48:43.719443       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:48:43.719677       1 main.go:227] handling current node
	I0422 11:48:43.719838       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:48:43.719868       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [8bd2ac5a2adfeb536099f59cf363bbbde81f2e3983e1d6c18a1f6565651e8ed9] <==
	I0422 11:42:38.963356       1 main.go:250] Node multinode-254635-m03 has CIDR [10.244.3.0/24] 
	I0422 11:42:48.968852       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:42:48.969123       1 main.go:227] handling current node
	I0422 11:42:48.969224       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:42:48.969332       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:42:48.969606       1 main.go:223] Handling node with IPs: map[192.168.39.75:{}]
	I0422 11:42:48.969642       1 main.go:250] Node multinode-254635-m03 has CIDR [10.244.3.0/24] 
	I0422 11:42:58.978875       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:42:58.979106       1 main.go:227] handling current node
	I0422 11:42:58.979152       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:42:58.979173       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:42:58.979303       1 main.go:223] Handling node with IPs: map[192.168.39.75:{}]
	I0422 11:42:58.979324       1 main.go:250] Node multinode-254635-m03 has CIDR [10.244.3.0/24] 
	I0422 11:43:08.984370       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:43:08.984465       1 main.go:227] handling current node
	I0422 11:43:08.984492       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:43:08.984518       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:43:08.984636       1 main.go:223] Handling node with IPs: map[192.168.39.75:{}]
	I0422 11:43:08.984657       1 main.go:250] Node multinode-254635-m03 has CIDR [10.244.3.0/24] 
	I0422 11:43:18.991778       1 main.go:223] Handling node with IPs: map[192.168.39.185:{}]
	I0422 11:43:18.991933       1 main.go:227] handling current node
	I0422 11:43:18.991963       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0422 11:43:18.991982       1 main.go:250] Node multinode-254635-m02 has CIDR [10.244.1.0/24] 
	I0422 11:43:18.992103       1 main.go:223] Handling node with IPs: map[192.168.39.75:{}]
	I0422 11:43:18.992129       1 main.go:250] Node multinode-254635-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7c0d3bf49be403d6755298c16ec74a2883dc6b3e3c8efde6968515c7bc280b9c] <==
	I0422 11:43:22.687662       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0422 11:43:22.691079       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.712633       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.712830       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.712906       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.712968       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713012       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713094       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713156       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713206       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713339       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713528       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713635       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713848       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.713971       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714031       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714091       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714142       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714194       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714246       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714297       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714363       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714417       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714488       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 11:43:22.714552       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9631acc3cd8008ea734f6268932041e8e0e08d96b2532faa4be1d1e017eae954] <==
	I0422 11:45:01.628170       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0422 11:45:01.646641       1 aggregator.go:165] initial CRD sync complete...
	I0422 11:45:01.646731       1 autoregister_controller.go:141] Starting autoregister controller
	I0422 11:45:01.646740       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0422 11:45:01.649587       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0422 11:45:01.654365       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 11:45:01.654423       1 policy_source.go:224] refreshing policies
	I0422 11:45:01.685420       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0422 11:45:01.727572       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0422 11:45:01.727639       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0422 11:45:01.727647       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0422 11:45:01.728325       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0422 11:45:01.730640       1 shared_informer.go:320] Caches are synced for configmaps
	I0422 11:45:01.734226       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0422 11:45:01.734681       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0422 11:45:01.740270       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0422 11:45:01.747403       1 cache.go:39] Caches are synced for autoregister controller
	I0422 11:45:02.559308       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0422 11:45:03.840024       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0422 11:45:03.980507       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0422 11:45:03.997770       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0422 11:45:04.091590       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0422 11:45:04.101431       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0422 11:45:14.698505       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0422 11:45:14.746108       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [a4867fbb06694830cf22ead69bf0ddd10a883b530a624a5e9b3b78fa115b0bc2] <==
	I0422 11:45:45.388621       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-254635-m02\" does not exist"
	I0422 11:45:45.401068       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-254635-m02" podCIDRs=["10.244.1.0/24"]
	I0422 11:45:47.287792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="173.663µs"
	I0422 11:45:47.330115       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.301µs"
	I0422 11:45:47.342257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.482µs"
	I0422 11:45:47.357514       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.082µs"
	I0422 11:45:47.364980       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.347µs"
	I0422 11:45:47.370784       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="119.382µs"
	I0422 11:45:54.631951       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:45:54.662344       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.643µs"
	I0422 11:45:54.678142       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.786µs"
	I0422 11:45:57.696630       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.068468ms"
	I0422 11:45:57.697476       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.976µs"
	I0422 11:46:14.343813       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:46:15.681229       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:46:15.681555       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-254635-m03\" does not exist"
	I0422 11:46:15.707872       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-254635-m03" podCIDRs=["10.244.2.0/24"]
	I0422 11:46:25.459145       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m03"
	I0422 11:46:30.861563       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:46:54.689795       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fsg5v"
	I0422 11:46:54.728868       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fsg5v"
	I0422 11:46:54.729139       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-8xngk"
	I0422 11:46:54.771479       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-8xngk"
	I0422 11:47:09.717213       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.98875ms"
	I0422 11:47:09.717406       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.611µs"
	
	
	==> kube-controller-manager [d66e29130d9c973c5174eff5a88cb844d52b9fa38ad6333085bb66c3bd155697] <==
	I0422 11:39:32.444416       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-254635-m02\" does not exist"
	I0422 11:39:32.458217       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-254635-m02" podCIDRs=["10.244.1.0/24"]
	I0422 11:39:35.905969       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-254635-m02"
	I0422 11:39:42.562419       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:39:44.796369       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.108493ms"
	I0422 11:39:44.820377       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.94515ms"
	I0422 11:39:44.836954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.479372ms"
	I0422 11:39:44.837099       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.2µs"
	I0422 11:39:48.335005       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.519124ms"
	I0422 11:39:48.335581       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.085µs"
	I0422 11:39:48.518329       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.003965ms"
	I0422 11:39:48.519176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.724µs"
	I0422 11:40:24.988988       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-254635-m03\" does not exist"
	I0422 11:40:24.989079       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:40:25.671074       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-254635-m03" podCIDRs=["10.244.2.0/24"]
	I0422 11:40:25.927612       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-254635-m03"
	I0422 11:40:34.119904       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:41:05.380402       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:41:06.480929       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-254635-m03\" does not exist"
	I0422 11:41:06.480997       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:41:06.494917       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-254635-m03" podCIDRs=["10.244.3.0/24"]
	I0422 11:41:16.043801       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m02"
	I0422 11:41:55.983460       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-254635-m03"
	I0422 11:41:56.041678       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.133166ms"
	I0422 11:41:56.041940       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.289µs"
	
	
	==> kube-proxy [0479f88c8f22fadfd7ca5c88a541baead1e410717b9d83c5f6e8c9c81026cd90] <==
	I0422 11:45:02.733443       1 server_linux.go:69] "Using iptables proxy"
	I0422 11:45:02.747991       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	I0422 11:45:02.873869       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 11:45:02.873972       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 11:45:02.873992       1 server_linux.go:165] "Using iptables Proxier"
	I0422 11:45:02.880012       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 11:45:02.880293       1 server.go:872] "Version info" version="v1.30.0"
	I0422 11:45:02.880342       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:45:02.887348       1 config.go:192] "Starting service config controller"
	I0422 11:45:02.887393       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 11:45:02.887419       1 config.go:101] "Starting endpoint slice config controller"
	I0422 11:45:02.887423       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 11:45:02.887458       1 config.go:319] "Starting node config controller"
	I0422 11:45:02.887488       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 11:45:02.987818       1 shared_informer.go:320] Caches are synced for node config
	I0422 11:45:02.987872       1 shared_informer.go:320] Caches are synced for service config
	I0422 11:45:02.987891       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [70ea62bce313978f143670020dbcaed41edb5279e812840d18fa210fbf68433d] <==
	I0422 11:38:28.565657       1 server_linux.go:69] "Using iptables proxy"
	I0422 11:38:28.577360       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	I0422 11:38:28.629123       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 11:38:28.629185       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 11:38:28.629201       1 server_linux.go:165] "Using iptables Proxier"
	I0422 11:38:28.632427       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 11:38:28.632783       1 server.go:872] "Version info" version="v1.30.0"
	I0422 11:38:28.633026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:38:28.634161       1 config.go:192] "Starting service config controller"
	I0422 11:38:28.634209       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 11:38:28.634232       1 config.go:101] "Starting endpoint slice config controller"
	I0422 11:38:28.634235       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 11:38:28.634802       1 config.go:319] "Starting node config controller"
	I0422 11:38:28.634834       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 11:38:28.734287       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 11:38:28.734472       1 shared_informer.go:320] Caches are synced for service config
	I0422 11:38:28.735053       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [07a0b4812dd3b1c6a7f0c82617d26c0ccc45b8b8e1d30d6c318f8bda12735f0b] <==
	E0422 11:38:11.518868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 11:38:11.537909       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 11:38:11.538024       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 11:38:11.558372       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 11:38:11.558428       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 11:38:11.669188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 11:38:11.671211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 11:38:11.713377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 11:38:11.713437       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0422 11:38:11.720441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 11:38:11.722328       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 11:38:11.740314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 11:38:11.740791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 11:38:11.751633       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 11:38:11.751790       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 11:38:11.812154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 11:38:11.812323       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 11:38:11.845280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 11:38:11.845540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 11:38:11.849519       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 11:38:11.849791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0422 11:38:11.954544       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 11:38:11.954597       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0422 11:38:13.821796       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0422 11:43:22.683460       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [86609797440bf5cb0ebb23673aadaa0c528eea783ee8792fab8e9c928d17a31c] <==
	I0422 11:44:59.763394       1 serving.go:380] Generated self-signed cert in-memory
	W0422 11:45:01.579123       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0422 11:45:01.579263       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 11:45:01.579298       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0422 11:45:01.579403       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0422 11:45:01.637396       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0422 11:45:01.637889       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 11:45:01.649918       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0422 11:45:01.652761       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0422 11:45:01.652782       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 11:45:01.652791       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 11:45:01.754772       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.793620    3078 topology_manager.go:215] "Topology Admit Handler" podUID="82216f3b-f366-4b55-893a-8f7c1b59372b" podNamespace="kube-system" podName="storage-provisioner"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.793677    3078 topology_manager.go:215] "Topology Admit Handler" podUID="ec3be7d9-b316-43ba-8c05-c028f530c07e" podNamespace="default" podName="busybox-fc5497c4f-w6wst"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.811639    3078 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.885534    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/82216f3b-f366-4b55-893a-8f7c1b59372b-tmp\") pod \"storage-provisioner\" (UID: \"82216f3b-f366-4b55-893a-8f7c1b59372b\") " pod="kube-system/storage-provisioner"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.886280    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/848b349d-906a-411c-a60b-b559d47ad2a7-lib-modules\") pod \"kindnet-jzhvl\" (UID: \"848b349d-906a-411c-a60b-b559d47ad2a7\") " pod="kube-system/kindnet-jzhvl"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.886398    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/848b349d-906a-411c-a60b-b559d47ad2a7-cni-cfg\") pod \"kindnet-jzhvl\" (UID: \"848b349d-906a-411c-a60b-b559d47ad2a7\") " pod="kube-system/kindnet-jzhvl"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.886444    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/848b349d-906a-411c-a60b-b559d47ad2a7-xtables-lock\") pod \"kindnet-jzhvl\" (UID: \"848b349d-906a-411c-a60b-b559d47ad2a7\") " pod="kube-system/kindnet-jzhvl"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.886491    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a91e327-1478-4e50-9993-de3d5406efaa-lib-modules\") pod \"kube-proxy-mr7rq\" (UID: \"3a91e327-1478-4e50-9993-de3d5406efaa\") " pod="kube-system/kube-proxy-mr7rq"
	Apr 22 11:45:01 multinode-254635 kubelet[3078]: I0422 11:45:01.886561    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a91e327-1478-4e50-9993-de3d5406efaa-xtables-lock\") pod \"kube-proxy-mr7rq\" (UID: \"3a91e327-1478-4e50-9993-de3d5406efaa\") " pod="kube-system/kube-proxy-mr7rq"
	Apr 22 11:45:09 multinode-254635 kubelet[3078]: I0422 11:45:09.239575    3078 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 22 11:45:57 multinode-254635 kubelet[3078]: E0422 11:45:57.841186    3078 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:45:57 multinode-254635 kubelet[3078]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:45:57 multinode-254635 kubelet[3078]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:45:57 multinode-254635 kubelet[3078]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:45:57 multinode-254635 kubelet[3078]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:46:57 multinode-254635 kubelet[3078]: E0422 11:46:57.842253    3078 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:46:57 multinode-254635 kubelet[3078]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:46:57 multinode-254635 kubelet[3078]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:46:57 multinode-254635 kubelet[3078]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:46:57 multinode-254635 kubelet[3078]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 11:47:57 multinode-254635 kubelet[3078]: E0422 11:47:57.852516    3078 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 11:47:57 multinode-254635 kubelet[3078]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 11:47:57 multinode-254635 kubelet[3078]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 11:47:57 multinode-254635 kubelet[3078]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 11:47:57 multinode-254635 kubelet[3078]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 11:48:51.510788   48531 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18711-7633/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-254635 -n multinode-254635
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-254635 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.52s)

                                                
                                    
TestPreload (339.25s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-421473 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-421473 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m16.436314608s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-421473 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-421473 image pull gcr.io/k8s-minikube/busybox: (2.700321562s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-421473
E0422 11:56:17.643955   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
E0422 11:56:57.327188   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-421473: exit status 82 (2m0.485700151s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-421473"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-421473 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-04-22 11:57:58.638178192 +0000 UTC m=+4815.253768610
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-421473 -n test-preload-421473
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-421473 -n test-preload-421473: exit status 3 (18.527358687s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 11:58:17.161089   51609 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host
	E0422 11:58:17.161109   51609 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.111:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-421473" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-421473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-421473
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-421473: (1.099375804s)
--- FAIL: TestPreload (339.25s)

                                                
                                    
TestKubernetesUpgrade (408.64s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-643419 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-643419 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m32.894776914s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-643419] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-643419" primary control-plane node in "kubernetes-upgrade-643419" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 12:03:40.666033   57781 out.go:291] Setting OutFile to fd 1 ...
	I0422 12:03:40.666145   57781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 12:03:40.666151   57781 out.go:304] Setting ErrFile to fd 2...
	I0422 12:03:40.666155   57781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 12:03:40.666352   57781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 12:03:40.666881   57781 out.go:298] Setting JSON to false
	I0422 12:03:40.667774   57781 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6364,"bootTime":1713781057,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 12:03:40.667835   57781 start.go:139] virtualization: kvm guest
	I0422 12:03:40.670867   57781 out.go:177] * [kubernetes-upgrade-643419] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 12:03:40.672874   57781 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 12:03:40.672779   57781 notify.go:220] Checking for updates...
	I0422 12:03:40.674658   57781 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 12:03:40.676215   57781 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 12:03:40.677699   57781 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 12:03:40.679165   57781 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 12:03:40.680833   57781 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 12:03:40.682919   57781 config.go:182] Loaded profile config "NoKubernetes-483459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0422 12:03:40.683070   57781 config.go:182] Loaded profile config "cert-expiration-454029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 12:03:40.683215   57781 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 12:03:40.721141   57781 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 12:03:40.722675   57781 start.go:297] selected driver: kvm2
	I0422 12:03:40.722689   57781 start.go:901] validating driver "kvm2" against <nil>
	I0422 12:03:40.722701   57781 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 12:03:40.723629   57781 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 12:03:40.723722   57781 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18711-7633/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 12:03:40.746814   57781 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 12:03:40.746860   57781 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 12:03:40.747120   57781 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0422 12:03:40.747166   57781 cni.go:84] Creating CNI manager for ""
	I0422 12:03:40.747180   57781 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 12:03:40.747189   57781 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 12:03:40.747249   57781 start.go:340] cluster config:
	{Name:kubernetes-upgrade-643419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-643419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 12:03:40.747342   57781 iso.go:125] acquiring lock: {Name:mkb6ac9fd17ffabc92a94047094130aad6203a95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 12:03:40.749640   57781 out.go:177] * Starting "kubernetes-upgrade-643419" primary control-plane node in "kubernetes-upgrade-643419" cluster
	I0422 12:03:40.751344   57781 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 12:03:40.751394   57781 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0422 12:03:40.751412   57781 cache.go:56] Caching tarball of preloaded images
	I0422 12:03:40.751496   57781 preload.go:173] Found /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 12:03:40.751510   57781 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0422 12:03:40.751647   57781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/config.json ...
	I0422 12:03:40.751674   57781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/config.json: {Name:mk0f3bad42c49bf918037bd6d1a1bbb065734efc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:03:40.751830   57781 start.go:360] acquireMachinesLock for kubernetes-upgrade-643419: {Name:mk5cb9b294e703b264c1f97ac968ffd01e93b576 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 12:03:40.751876   57781 start.go:364] duration metric: took 27.181µs to acquireMachinesLock for "kubernetes-upgrade-643419"
	I0422 12:03:40.751899   57781 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-643419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-643419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 12:03:40.751978   57781 start.go:125] createHost starting for "" (driver="kvm2")
	I0422 12:03:40.753946   57781 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 12:03:40.754135   57781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 12:03:40.754179   57781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 12:03:40.770892   57781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I0422 12:03:40.771395   57781 main.go:141] libmachine: () Calling .GetVersion
	I0422 12:03:40.771890   57781 main.go:141] libmachine: Using API Version  1
	I0422 12:03:40.771910   57781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 12:03:40.772263   57781 main.go:141] libmachine: () Calling .GetMachineName
	I0422 12:03:40.772451   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetMachineName
	I0422 12:03:40.772596   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .DriverName
	I0422 12:03:40.772716   57781 start.go:159] libmachine.API.Create for "kubernetes-upgrade-643419" (driver="kvm2")
	I0422 12:03:40.772811   57781 client.go:168] LocalClient.Create starting
	I0422 12:03:40.772855   57781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem
	I0422 12:03:40.772896   57781 main.go:141] libmachine: Decoding PEM data...
	I0422 12:03:40.772918   57781 main.go:141] libmachine: Parsing certificate...
	I0422 12:03:40.772997   57781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem
	I0422 12:03:40.773023   57781 main.go:141] libmachine: Decoding PEM data...
	I0422 12:03:40.773035   57781 main.go:141] libmachine: Parsing certificate...
	I0422 12:03:40.773056   57781 main.go:141] libmachine: Running pre-create checks...
	I0422 12:03:40.773067   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .PreCreateCheck
	I0422 12:03:40.773417   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetConfigRaw
	I0422 12:03:40.773833   57781 main.go:141] libmachine: Creating machine...
	I0422 12:03:40.773849   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .Create
	I0422 12:03:40.774000   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Creating KVM machine...
	I0422 12:03:40.775404   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found existing default KVM network
	I0422 12:03:40.776466   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:40.776312   57828 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b4:2a:ef} reservation:<nil>}
	I0422 12:03:40.777761   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:40.777657   57828 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010fe10}
	I0422 12:03:40.777797   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | created network xml: 
	I0422 12:03:40.777819   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | <network>
	I0422 12:03:40.777835   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG |   <name>mk-kubernetes-upgrade-643419</name>
	I0422 12:03:40.777844   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG |   <dns enable='no'/>
	I0422 12:03:40.777855   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG |   
	I0422 12:03:40.777866   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0422 12:03:40.777878   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG |     <dhcp>
	I0422 12:03:40.777897   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0422 12:03:40.777947   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG |     </dhcp>
	I0422 12:03:40.777975   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG |   </ip>
	I0422 12:03:40.777985   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG |   
	I0422 12:03:40.777991   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | </network>
	I0422 12:03:40.777998   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | 
	I0422 12:03:40.783824   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | trying to create private KVM network mk-kubernetes-upgrade-643419 192.168.50.0/24...
	I0422 12:03:40.861331   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Setting up store path in /home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419 ...
	I0422 12:03:40.861365   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | private KVM network mk-kubernetes-upgrade-643419 192.168.50.0/24 created
	I0422 12:03:40.861378   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Building disk image from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0422 12:03:40.861409   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Downloading /home/jenkins/minikube-integration/18711-7633/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0422 12:03:40.861436   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:40.861278   57828 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 12:03:41.105906   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:41.105792   57828 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419/id_rsa...
	I0422 12:03:41.384438   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:41.384323   57828 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419/kubernetes-upgrade-643419.rawdisk...
	I0422 12:03:41.384467   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | Writing magic tar header
	I0422 12:03:41.384488   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | Writing SSH key tar header
	I0422 12:03:41.384502   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:41.384466   57828 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419 ...
	I0422 12:03:41.384636   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419
	I0422 12:03:41.384679   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419 (perms=drwx------)
	I0422 12:03:41.384693   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines
	I0422 12:03:41.384708   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 12:03:41.384723   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633
	I0422 12:03:41.384750   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines (perms=drwxr-xr-x)
	I0422 12:03:41.384809   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube (perms=drwxr-xr-x)
	I0422 12:03:41.384824   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 12:03:41.384837   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | Checking permissions on dir: /home/jenkins
	I0422 12:03:41.384849   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | Checking permissions on dir: /home
	I0422 12:03:41.384881   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | Skipping /home - not owner
	I0422 12:03:41.384928   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633 (perms=drwxrwxr-x)
	I0422 12:03:41.384949   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 12:03:41.384960   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 12:03:41.384968   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Creating domain...
	I0422 12:03:41.385643   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) define libvirt domain using xml: 
	I0422 12:03:41.385661   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) <domain type='kvm'>
	I0422 12:03:41.385689   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)   <name>kubernetes-upgrade-643419</name>
	I0422 12:03:41.385711   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)   <memory unit='MiB'>2200</memory>
	I0422 12:03:41.385723   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)   <vcpu>2</vcpu>
	I0422 12:03:41.385733   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)   <features>
	I0422 12:03:41.385742   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     <acpi/>
	I0422 12:03:41.385753   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     <apic/>
	I0422 12:03:41.385772   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     <pae/>
	I0422 12:03:41.385787   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     
	I0422 12:03:41.385799   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)   </features>
	I0422 12:03:41.385809   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)   <cpu mode='host-passthrough'>
	I0422 12:03:41.385820   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)   
	I0422 12:03:41.385831   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)   </cpu>
	I0422 12:03:41.385841   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)   <os>
	I0422 12:03:41.385864   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     <type>hvm</type>
	I0422 12:03:41.385877   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     <boot dev='cdrom'/>
	I0422 12:03:41.385896   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     <boot dev='hd'/>
	I0422 12:03:41.385910   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     <bootmenu enable='no'/>
	I0422 12:03:41.385918   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)   </os>
	I0422 12:03:41.385922   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)   <devices>
	I0422 12:03:41.385934   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     <disk type='file' device='cdrom'>
	I0422 12:03:41.385951   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419/boot2docker.iso'/>
	I0422 12:03:41.385976   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)       <target dev='hdc' bus='scsi'/>
	I0422 12:03:41.385992   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)       <readonly/>
	I0422 12:03:41.386004   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     </disk>
	I0422 12:03:41.386017   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     <disk type='file' device='disk'>
	I0422 12:03:41.386028   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 12:03:41.386038   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419/kubernetes-upgrade-643419.rawdisk'/>
	I0422 12:03:41.386050   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)       <target dev='hda' bus='virtio'/>
	I0422 12:03:41.386065   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     </disk>
	I0422 12:03:41.386079   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     <interface type='network'>
	I0422 12:03:41.386092   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)       <source network='mk-kubernetes-upgrade-643419'/>
	I0422 12:03:41.386104   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)       <model type='virtio'/>
	I0422 12:03:41.386114   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     </interface>
	I0422 12:03:41.386124   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     <interface type='network'>
	I0422 12:03:41.386136   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)       <source network='default'/>
	I0422 12:03:41.386158   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)       <model type='virtio'/>
	I0422 12:03:41.386179   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     </interface>
	I0422 12:03:41.386192   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     <serial type='pty'>
	I0422 12:03:41.386204   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)       <target port='0'/>
	I0422 12:03:41.386214   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     </serial>
	I0422 12:03:41.386225   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     <console type='pty'>
	I0422 12:03:41.386237   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)       <target type='serial' port='0'/>
	I0422 12:03:41.386247   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     </console>
	I0422 12:03:41.386257   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     <rng model='virtio'>
	I0422 12:03:41.386272   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)       <backend model='random'>/dev/random</backend>
	I0422 12:03:41.386283   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     </rng>
	I0422 12:03:41.386294   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     
	I0422 12:03:41.386302   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)     
	I0422 12:03:41.386313   57781 main.go:141] libmachine: (kubernetes-upgrade-643419)   </devices>
	I0422 12:03:41.386324   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) </domain>
	I0422 12:03:41.386339   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) 
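	The <network> and <domain> XML documents logged above are what the kvm2 driver hands to libvirt. As a rough, hypothetical sketch (not the driver's actual code), defining the private network and booting a domain from XML like this with the libvirt Go bindings could look as follows; the connection URI and the placeholder XML strings are assumptions:

	// Hypothetical sketch: create the private network and the domain from XML
	// strings like the ones logged above, using the libvirt Go bindings
	// (libvirt.org/go/libvirt). Error handling is kept minimal for brevity.
	package main

	import (
		"log"

		libvirt "libvirt.org/go/libvirt"
	)

	func createNetworkAndDomain(networkXML, domainXML string) error {
		// qemu:///system matches the KVMQemuURI in the cluster config above.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			return err
		}
		defer conn.Close()

		// Define and start the private network ("trying to create private KVM network ...").
		net, err := conn.NetworkDefineXML(networkXML)
		if err != nil {
			return err
		}
		defer net.Free()
		if err := net.Create(); err != nil {
			return err
		}

		// Define the domain from its XML and boot it ("Creating domain...").
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return err
		}
		defer dom.Free()
		return dom.Create()
	}

	func main() {
		// Placeholder XML; in practice these would be the full documents shown in the log.
		if err := createNetworkAndDomain("<network>...</network>", "<domain type='kvm'>...</domain>"); err != nil {
			log.Fatal(err)
		}
	}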
	I0422 12:03:41.390503   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:19:09:06 in network default
	I0422 12:03:41.391016   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Ensuring networks are active...
	I0422 12:03:41.391034   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:03:41.391641   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Ensuring network default is active
	I0422 12:03:41.391985   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Ensuring network mk-kubernetes-upgrade-643419 is active
	I0422 12:03:41.392498   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Getting domain xml...
	I0422 12:03:41.393211   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Creating domain...
	I0422 12:03:42.716534   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Waiting to get IP...
	I0422 12:03:42.717512   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:03:42.717995   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | unable to find current IP address of domain kubernetes-upgrade-643419 in network mk-kubernetes-upgrade-643419
	I0422 12:03:42.718024   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:42.717948   57828 retry.go:31] will retry after 226.515916ms: waiting for machine to come up
	I0422 12:03:42.946450   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:03:42.946917   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | unable to find current IP address of domain kubernetes-upgrade-643419 in network mk-kubernetes-upgrade-643419
	I0422 12:03:42.947007   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:42.946835   57828 retry.go:31] will retry after 387.08693ms: waiting for machine to come up
	I0422 12:03:43.335481   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:03:43.336049   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | unable to find current IP address of domain kubernetes-upgrade-643419 in network mk-kubernetes-upgrade-643419
	I0422 12:03:43.336077   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:43.335994   57828 retry.go:31] will retry after 323.075548ms: waiting for machine to come up
	I0422 12:03:43.660432   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:03:43.766528   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | unable to find current IP address of domain kubernetes-upgrade-643419 in network mk-kubernetes-upgrade-643419
	I0422 12:03:43.766569   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:43.766453   57828 retry.go:31] will retry after 471.466945ms: waiting for machine to come up
	I0422 12:03:44.239368   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:03:44.239984   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | unable to find current IP address of domain kubernetes-upgrade-643419 in network mk-kubernetes-upgrade-643419
	I0422 12:03:44.240008   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:44.239938   57828 retry.go:31] will retry after 672.608926ms: waiting for machine to come up
	I0422 12:03:44.913867   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:03:44.914501   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | unable to find current IP address of domain kubernetes-upgrade-643419 in network mk-kubernetes-upgrade-643419
	I0422 12:03:44.914526   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:44.914450   57828 retry.go:31] will retry after 700.688874ms: waiting for machine to come up
	I0422 12:03:45.617300   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:03:45.617848   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | unable to find current IP address of domain kubernetes-upgrade-643419 in network mk-kubernetes-upgrade-643419
	I0422 12:03:45.617876   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:45.617775   57828 retry.go:31] will retry after 980.281828ms: waiting for machine to come up
	I0422 12:03:46.599534   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:03:46.600060   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | unable to find current IP address of domain kubernetes-upgrade-643419 in network mk-kubernetes-upgrade-643419
	I0422 12:03:46.600092   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:46.599985   57828 retry.go:31] will retry after 1.061780231s: waiting for machine to come up
	I0422 12:03:47.663849   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:03:47.664365   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | unable to find current IP address of domain kubernetes-upgrade-643419 in network mk-kubernetes-upgrade-643419
	I0422 12:03:47.664391   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:47.664288   57828 retry.go:31] will retry after 1.409089001s: waiting for machine to come up
	I0422 12:03:49.075730   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:03:49.076187   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | unable to find current IP address of domain kubernetes-upgrade-643419 in network mk-kubernetes-upgrade-643419
	I0422 12:03:49.076223   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:49.076127   57828 retry.go:31] will retry after 1.784712601s: waiting for machine to come up
	I0422 12:03:50.863082   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:03:50.863499   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | unable to find current IP address of domain kubernetes-upgrade-643419 in network mk-kubernetes-upgrade-643419
	I0422 12:03:50.863527   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:50.863450   57828 retry.go:31] will retry after 2.040304194s: waiting for machine to come up
	I0422 12:03:52.906797   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:03:52.907388   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | unable to find current IP address of domain kubernetes-upgrade-643419 in network mk-kubernetes-upgrade-643419
	I0422 12:03:52.907415   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:52.907329   57828 retry.go:31] will retry after 3.502176684s: waiting for machine to come up
	I0422 12:03:56.411502   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:03:56.411964   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | unable to find current IP address of domain kubernetes-upgrade-643419 in network mk-kubernetes-upgrade-643419
	I0422 12:03:56.411987   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:03:56.411919   57828 retry.go:31] will retry after 4.490009194s: waiting for machine to come up
	I0422 12:04:00.903942   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:00.904433   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | unable to find current IP address of domain kubernetes-upgrade-643419 in network mk-kubernetes-upgrade-643419
	I0422 12:04:00.904455   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | I0422 12:04:00.904379   57828 retry.go:31] will retry after 3.654398713s: waiting for machine to come up
	I0422 12:04:04.563278   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:04.563874   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Found IP for machine: 192.168.50.54
	I0422 12:04:04.563901   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has current primary IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:04.563908   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Reserving static IP address...
	I0422 12:04:04.564322   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-643419", mac: "52:54:00:8b:d0:37", ip: "192.168.50.54"} in network mk-kubernetes-upgrade-643419
	I0422 12:04:04.640135   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | Getting to WaitForSSH function...
	I0422 12:04:04.640171   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Reserved static IP address: 192.168.50.54
	I0422 12:04:04.640187   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Waiting for SSH to be available...
	I0422 12:04:04.643015   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:04.643565   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:03:56 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8b:d0:37}
	I0422 12:04:04.643613   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:04.643746   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | Using SSH client type: external
	I0422 12:04:04.643776   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | Using SSH private key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419/id_rsa (-rw-------)
	I0422 12:04:04.643812   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 12:04:04.643834   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | About to run SSH command:
	I0422 12:04:04.643847   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | exit 0
	I0422 12:04:04.769326   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | SSH cmd err, output: <nil>: 
	I0422 12:04:04.769548   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) KVM machine creation complete!
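	The run of "will retry after ...: waiting for machine to come up" lines above is a poll-with-growing-delay loop: probe the machine (here, an SSH "exit 0") and sleep a little longer between attempts until it answers or a deadline passes. A minimal, hypothetical sketch of that pattern; the probe function, address, and delay growth are assumptions, not minikube's retry.go:

	// Hypothetical sketch of the poll-with-growing-delay wait seen in the log.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// probeSSH is a stand-in (assumption) for running `exit 0` over SSH.
	func probeSSH(addr string) error {
		return errors.New("machine not up yet") // placeholder result
	}

	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if err := probeSSH(addr); err == nil {
				return nil // corresponds to "SSH cmd err, output: <nil>"
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay += delay / 2 // grow the delay, roughly like the increasing intervals above
		}
		return fmt.Errorf("timed out waiting for SSH on %s", addr)
	}

	func main() {
		_ = waitForSSH("192.168.50.54:22", 5*time.Second)
	}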
	I0422 12:04:04.769909   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetConfigRaw
	I0422 12:04:04.770439   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .DriverName
	I0422 12:04:04.770639   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .DriverName
	I0422 12:04:04.770801   57781 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 12:04:04.770813   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetState
	I0422 12:04:04.772124   57781 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 12:04:04.772136   57781 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 12:04:04.772141   57781 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 12:04:04.772148   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:04:04.774466   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:04.774840   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:03:56 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:04:04.774874   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:04.774984   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:04:04.775147   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:04:04.775322   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:04:04.775487   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:04:04.775664   57781 main.go:141] libmachine: Using SSH client type: native
	I0422 12:04:04.775838   57781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0422 12:04:04.775848   57781 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 12:04:04.876560   57781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 12:04:04.876583   57781 main.go:141] libmachine: Detecting the provisioner...
	I0422 12:04:04.876591   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:04:04.879341   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:04.879770   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:03:56 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:04:04.879818   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:04.879938   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:04:04.880141   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:04:04.880303   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:04:04.880443   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:04:04.880608   57781 main.go:141] libmachine: Using SSH client type: native
	I0422 12:04:04.880767   57781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0422 12:04:04.880792   57781 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 12:04:04.986188   57781 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 12:04:04.986269   57781 main.go:141] libmachine: found compatible host: buildroot
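	The provisioner is picked by running "cat /etc/os-release" over SSH and matching the ID field, which here resolves to buildroot. A small, hypothetical sketch of that detection step, assuming the command output has already been captured as a string:

	// Hypothetical sketch: detect the provisioner from `cat /etc/os-release`
	// output, as in the "found compatible host: buildroot" step above.
	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func detectProvisioner(osRelease string) string {
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			line := sc.Text()
			if strings.HasPrefix(line, "ID=") {
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			}
		}
		return ""
	}

	func main() {
		sample := "NAME=Buildroot\nVERSION=2023.02.9\nID=buildroot\n"
		fmt.Println(detectProvisioner(sample)) // prints: buildroot
	}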
	I0422 12:04:04.986292   57781 main.go:141] libmachine: Provisioning with buildroot...
	I0422 12:04:04.986306   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetMachineName
	I0422 12:04:04.986566   57781 buildroot.go:166] provisioning hostname "kubernetes-upgrade-643419"
	I0422 12:04:04.986600   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetMachineName
	I0422 12:04:04.986826   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:04:04.989754   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:04.990241   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:03:56 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:04:04.990276   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:04.990543   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:04:04.990768   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:04:04.990947   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:04:04.991106   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:04:04.991288   57781 main.go:141] libmachine: Using SSH client type: native
	I0422 12:04:04.991491   57781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0422 12:04:04.991504   57781 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-643419 && echo "kubernetes-upgrade-643419" | sudo tee /etc/hostname
	I0422 12:04:05.113944   57781 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-643419
	
	I0422 12:04:05.113969   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:04:05.116815   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:05.117238   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:03:56 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:04:05.117272   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:05.117392   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:04:05.117598   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:04:05.117794   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:04:05.117954   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:04:05.118136   57781 main.go:141] libmachine: Using SSH client type: native
	I0422 12:04:05.118344   57781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0422 12:04:05.118366   57781 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-643419' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-643419/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-643419' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 12:04:05.235397   57781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 12:04:05.235423   57781 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18711-7633/.minikube CaCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18711-7633/.minikube}
	I0422 12:04:05.235440   57781 buildroot.go:174] setting up certificates
	I0422 12:04:05.235452   57781 provision.go:84] configureAuth start
	I0422 12:04:05.235463   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetMachineName
	I0422 12:04:05.235754   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetIP
	I0422 12:04:05.238436   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:05.238841   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:03:56 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:04:05.238867   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:05.239012   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:04:05.241319   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:05.241905   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:03:56 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:04:05.241968   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:05.242178   57781 provision.go:143] copyHostCerts
	I0422 12:04:05.242281   57781 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem, removing ...
	I0422 12:04:05.242307   57781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 12:04:05.278740   57781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem (1078 bytes)
	I0422 12:04:05.278886   57781 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem, removing ...
	I0422 12:04:05.278899   57781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 12:04:05.278926   57781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem (1123 bytes)
	I0422 12:04:05.278999   57781 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem, removing ...
	I0422 12:04:05.279013   57781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 12:04:05.279034   57781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem (1679 bytes)
	I0422 12:04:05.279108   57781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-643419 san=[127.0.0.1 192.168.50.54 kubernetes-upgrade-643419 localhost minikube]
	I0422 12:04:05.514374   57781 provision.go:177] copyRemoteCerts
	I0422 12:04:05.514455   57781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 12:04:05.514480   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:04:05.516822   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:05.517216   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:03:56 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:04:05.517252   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:05.517465   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:04:05.517668   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:04:05.517849   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:04:05.518040   57781 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419/id_rsa Username:docker}
	I0422 12:04:05.605813   57781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 12:04:05.640931   57781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0422 12:04:05.674327   57781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 12:04:05.756968   57781 provision.go:87] duration metric: took 521.504822ms to configureAuth
	I0422 12:04:05.757003   57781 buildroot.go:189] setting minikube options for container-runtime
	I0422 12:04:05.757203   57781 config.go:182] Loaded profile config "kubernetes-upgrade-643419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 12:04:05.757274   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:04:05.760016   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:05.760519   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:03:56 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:04:05.760552   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:05.760745   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:04:05.761000   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:04:05.761175   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:04:05.761335   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:04:05.761563   57781 main.go:141] libmachine: Using SSH client type: native
	I0422 12:04:05.761728   57781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0422 12:04:05.761754   57781 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 12:04:06.133566   57781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 12:04:06.133598   57781 main.go:141] libmachine: Checking connection to Docker...
	I0422 12:04:06.133611   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetURL
	I0422 12:04:06.134984   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | Using libvirt version 6000000
	I0422 12:04:06.137460   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:06.137817   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:03:56 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:04:06.137846   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:06.138127   57781 main.go:141] libmachine: Docker is up and running!
	I0422 12:04:06.138145   57781 main.go:141] libmachine: Reticulating splines...
	I0422 12:04:06.138153   57781 client.go:171] duration metric: took 25.365328791s to LocalClient.Create
	I0422 12:04:06.138178   57781 start.go:167] duration metric: took 25.365470467s to libmachine.API.Create "kubernetes-upgrade-643419"
	I0422 12:04:06.138202   57781 start.go:293] postStartSetup for "kubernetes-upgrade-643419" (driver="kvm2")
	I0422 12:04:06.138218   57781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 12:04:06.138243   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .DriverName
	I0422 12:04:06.138483   57781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 12:04:06.138506   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:04:06.140885   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:06.141214   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:03:56 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:04:06.141252   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:06.141431   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:04:06.141655   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:04:06.141822   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:04:06.141997   57781 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419/id_rsa Username:docker}
	I0422 12:04:06.224336   57781 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 12:04:06.229428   57781 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 12:04:06.229452   57781 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/addons for local assets ...
	I0422 12:04:06.229536   57781 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/files for local assets ...
	I0422 12:04:06.229670   57781 filesync.go:149] local asset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> 149452.pem in /etc/ssl/certs
	I0422 12:04:06.229785   57781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 12:04:06.239997   57781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /etc/ssl/certs/149452.pem (1708 bytes)
	I0422 12:04:06.268859   57781 start.go:296] duration metric: took 130.641263ms for postStartSetup
	I0422 12:04:06.268914   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetConfigRaw
	I0422 12:04:06.269553   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetIP
	I0422 12:04:06.272307   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:06.272613   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:03:56 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:04:06.272646   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:06.272899   57781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/config.json ...
	I0422 12:04:06.273111   57781 start.go:128] duration metric: took 25.521122174s to createHost
	I0422 12:04:06.273140   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:04:06.275351   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:06.275696   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:03:56 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:04:06.275728   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:06.275954   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:04:06.276134   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:04:06.276300   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:04:06.276459   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:04:06.276616   57781 main.go:141] libmachine: Using SSH client type: native
	I0422 12:04:06.276831   57781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0422 12:04:06.276846   57781 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 12:04:06.382755   57781 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713787446.369492213
	
	I0422 12:04:06.382782   57781 fix.go:216] guest clock: 1713787446.369492213
	I0422 12:04:06.382792   57781 fix.go:229] Guest: 2024-04-22 12:04:06.369492213 +0000 UTC Remote: 2024-04-22 12:04:06.273126172 +0000 UTC m=+25.666288044 (delta=96.366041ms)
	I0422 12:04:06.382818   57781 fix.go:200] guest clock delta is within tolerance: 96.366041ms
	I0422 12:04:06.382824   57781 start.go:83] releasing machines lock for "kubernetes-upgrade-643419", held for 25.630938977s
	I0422 12:04:06.382855   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .DriverName
	I0422 12:04:06.383114   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetIP
	I0422 12:04:06.386082   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:06.386439   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:03:56 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:04:06.386470   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:06.386656   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .DriverName
	I0422 12:04:06.387200   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .DriverName
	I0422 12:04:06.387376   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .DriverName
	I0422 12:04:06.387464   57781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 12:04:06.387506   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:04:06.387629   57781 ssh_runner.go:195] Run: cat /version.json
	I0422 12:04:06.387670   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:04:06.390305   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:06.390625   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:06.390655   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:03:56 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:04:06.390675   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:06.390838   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:04:06.391044   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:04:06.391085   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:03:56 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:04:06.391126   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:06.391224   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:04:06.391247   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:04:06.391445   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:04:06.391447   57781 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419/id_rsa Username:docker}
	I0422 12:04:06.391590   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:04:06.391734   57781 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419/id_rsa Username:docker}
	I0422 12:04:06.495731   57781 ssh_runner.go:195] Run: systemctl --version
	I0422 12:04:06.503176   57781 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 12:04:06.670955   57781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 12:04:06.679591   57781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 12:04:06.679659   57781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 12:04:06.699035   57781 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 12:04:06.699056   57781 start.go:494] detecting cgroup driver to use...
	I0422 12:04:06.699113   57781 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 12:04:06.719220   57781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 12:04:06.736567   57781 docker.go:217] disabling cri-docker service (if available) ...
	I0422 12:04:06.736633   57781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 12:04:06.753157   57781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 12:04:06.769498   57781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 12:04:06.911313   57781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 12:04:07.077849   57781 docker.go:233] disabling docker service ...
	I0422 12:04:07.077914   57781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 12:04:07.094809   57781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 12:04:07.109393   57781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 12:04:07.266607   57781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 12:04:07.389902   57781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 12:04:07.407547   57781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 12:04:07.429085   57781 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0422 12:04:07.429160   57781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:04:07.441549   57781 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 12:04:07.441615   57781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:04:07.454260   57781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:04:07.466349   57781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:04:07.478589   57781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 12:04:07.491494   57781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 12:04:07.502346   57781 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 12:04:07.502421   57781 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 12:04:07.516950   57781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 12:04:07.528406   57781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 12:04:07.659500   57781 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 12:04:07.821738   57781 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 12:04:07.821830   57781 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 12:04:07.828184   57781 start.go:562] Will wait 60s for crictl version
	I0422 12:04:07.828256   57781 ssh_runner.go:195] Run: which crictl
	I0422 12:04:07.832686   57781 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 12:04:07.881635   57781 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 12:04:07.881748   57781 ssh_runner.go:195] Run: crio --version
	I0422 12:04:07.928664   57781 ssh_runner.go:195] Run: crio --version
	I0422 12:04:07.966881   57781 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0422 12:04:07.968364   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetIP
	I0422 12:04:07.971987   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:07.972397   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:03:56 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:04:07.972427   57781 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:04:07.972682   57781 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0422 12:04:07.977755   57781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 12:04:07.995587   57781 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-643419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-643419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 12:04:07.995710   57781 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 12:04:07.995771   57781 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 12:04:08.035256   57781 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 12:04:08.035315   57781 ssh_runner.go:195] Run: which lz4
	I0422 12:04:08.040145   57781 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0422 12:04:08.045182   57781 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 12:04:08.045212   57781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0422 12:04:10.167331   57781 crio.go:462] duration metric: took 2.127225963s to copy over tarball
	I0422 12:04:10.167415   57781 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 12:04:13.126784   57781 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.959337166s)
	I0422 12:04:13.126811   57781 crio.go:469] duration metric: took 2.959451004s to extract the tarball
	I0422 12:04:13.126819   57781 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 12:04:13.178265   57781 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 12:04:13.230189   57781 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 12:04:13.230223   57781 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0422 12:04:13.230278   57781 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 12:04:13.230306   57781 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 12:04:13.230375   57781 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 12:04:13.230474   57781 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 12:04:13.230550   57781 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 12:04:13.230559   57781 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0422 12:04:13.230577   57781 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0422 12:04:13.230861   57781 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0422 12:04:13.231921   57781 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 12:04:13.231930   57781 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 12:04:13.231935   57781 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0422 12:04:13.231940   57781 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0422 12:04:13.231925   57781 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0422 12:04:13.231976   57781 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 12:04:13.232013   57781 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 12:04:13.232067   57781 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 12:04:13.377783   57781 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0422 12:04:13.388706   57781 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0422 12:04:13.392540   57781 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0422 12:04:13.399593   57781 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0422 12:04:13.415151   57781 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 12:04:13.429194   57781 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0422 12:04:13.437522   57781 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0422 12:04:13.480604   57781 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0422 12:04:13.480649   57781 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 12:04:13.480725   57781 ssh_runner.go:195] Run: which crictl
	I0422 12:04:13.534218   57781 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0422 12:04:13.534266   57781 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0422 12:04:13.534313   57781 ssh_runner.go:195] Run: which crictl
	I0422 12:04:13.558961   57781 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0422 12:04:13.559021   57781 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 12:04:13.559074   57781 ssh_runner.go:195] Run: which crictl
	I0422 12:04:13.592034   57781 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0422 12:04:13.592080   57781 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 12:04:13.592116   57781 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0422 12:04:13.592134   57781 ssh_runner.go:195] Run: which crictl
	I0422 12:04:13.592155   57781 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 12:04:13.592194   57781 ssh_runner.go:195] Run: which crictl
	I0422 12:04:13.613982   57781 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0422 12:04:13.614027   57781 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0422 12:04:13.614074   57781 ssh_runner.go:195] Run: which crictl
	I0422 12:04:13.627499   57781 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0422 12:04:13.627554   57781 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0422 12:04:13.627557   57781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0422 12:04:13.627581   57781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0422 12:04:13.627605   57781 ssh_runner.go:195] Run: which crictl
	I0422 12:04:13.627647   57781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0422 12:04:13.627664   57781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0422 12:04:13.627677   57781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 12:04:13.627722   57781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0422 12:04:13.797436   57781 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0422 12:04:13.797489   57781 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0422 12:04:13.797519   57781 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0422 12:04:13.797582   57781 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0422 12:04:13.797648   57781 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0422 12:04:13.797664   57781 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0422 12:04:13.797737   57781 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0422 12:04:13.839815   57781 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0422 12:04:14.125676   57781 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 12:04:14.275109   57781 cache_images.go:92] duration metric: took 1.044866456s to LoadCachedImages
	W0422 12:04:14.275204   57781 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18711-7633/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18711-7633/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0422 12:04:14.275232   57781 kubeadm.go:928] updating node { 192.168.50.54 8443 v1.20.0 crio true true} ...
	I0422 12:04:14.275381   57781 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-643419 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-643419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 12:04:14.275470   57781 ssh_runner.go:195] Run: crio config
	I0422 12:04:14.338132   57781 cni.go:84] Creating CNI manager for ""
	I0422 12:04:14.338155   57781 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 12:04:14.338167   57781 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 12:04:14.338186   57781 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.54 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-643419 NodeName:kubernetes-upgrade-643419 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0422 12:04:14.338316   57781 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-643419"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 12:04:14.338375   57781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0422 12:04:14.350243   57781 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 12:04:14.350312   57781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 12:04:14.363199   57781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0422 12:04:14.384406   57781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 12:04:14.407600   57781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0422 12:04:14.429079   57781 ssh_runner.go:195] Run: grep 192.168.50.54	control-plane.minikube.internal$ /etc/hosts
	I0422 12:04:14.433859   57781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 12:04:14.450133   57781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 12:04:14.594580   57781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 12:04:14.624522   57781 certs.go:68] Setting up /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419 for IP: 192.168.50.54
	I0422 12:04:14.624551   57781 certs.go:194] generating shared ca certs ...
	I0422 12:04:14.624569   57781 certs.go:226] acquiring lock for ca certs: {Name:mk0b77082b88c771d0b00be5267ca31dfee6f85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:04:14.624716   57781 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key
	I0422 12:04:14.624797   57781 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key
	I0422 12:04:14.624813   57781 certs.go:256] generating profile certs ...
	I0422 12:04:14.624882   57781 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/client.key
	I0422 12:04:14.624905   57781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/client.crt with IP's: []
	I0422 12:04:14.727805   57781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/client.crt ...
	I0422 12:04:14.727838   57781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/client.crt: {Name:mkb8c6b1ea37aff186fd5e8abe19cdb9e33bcea7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:04:14.728014   57781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/client.key ...
	I0422 12:04:14.728031   57781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/client.key: {Name:mk665a0aba1587eec40d6be28c903e3984ce8255 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:04:14.728132   57781 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/apiserver.key.8993c292
	I0422 12:04:14.728152   57781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/apiserver.crt.8993c292 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.54]
	I0422 12:04:14.826227   57781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/apiserver.crt.8993c292 ...
	I0422 12:04:14.826265   57781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/apiserver.crt.8993c292: {Name:mk2ed4724e89b61997d387331b1594eddce18511 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:04:14.826447   57781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/apiserver.key.8993c292 ...
	I0422 12:04:14.826466   57781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/apiserver.key.8993c292: {Name:mk5f35a834fc61a87fa9a287f994f3a7b88a5281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:04:14.826546   57781 certs.go:381] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/apiserver.crt.8993c292 -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/apiserver.crt
	I0422 12:04:14.826624   57781 certs.go:385] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/apiserver.key.8993c292 -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/apiserver.key
	I0422 12:04:14.826679   57781 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/proxy-client.key
	I0422 12:04:14.826695   57781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/proxy-client.crt with IP's: []
	I0422 12:04:15.119879   57781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/proxy-client.crt ...
	I0422 12:04:15.119908   57781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/proxy-client.crt: {Name:mked305d08335a2b066245a370e49649b2104ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:04:15.120092   57781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/proxy-client.key ...
	I0422 12:04:15.120115   57781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/proxy-client.key: {Name:mk36add916daef3c24f892094ad328a87ec8fe7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:04:15.120332   57781 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem (1338 bytes)
	W0422 12:04:15.120380   57781 certs.go:480] ignoring /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945_empty.pem, impossibly tiny 0 bytes
	I0422 12:04:15.120395   57781 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem (1679 bytes)
	I0422 12:04:15.120425   57781 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem (1078 bytes)
	I0422 12:04:15.120455   57781 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem (1123 bytes)
	I0422 12:04:15.120486   57781 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem (1679 bytes)
	I0422 12:04:15.120541   57781 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem (1708 bytes)
	I0422 12:04:15.121182   57781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 12:04:15.155199   57781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 12:04:15.190573   57781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 12:04:15.221226   57781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 12:04:15.251740   57781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0422 12:04:15.280344   57781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 12:04:15.310657   57781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 12:04:15.344286   57781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 12:04:15.374959   57781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem --> /usr/share/ca-certificates/14945.pem (1338 bytes)
	I0422 12:04:15.406368   57781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /usr/share/ca-certificates/149452.pem (1708 bytes)
	I0422 12:04:15.453690   57781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 12:04:15.494433   57781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 12:04:15.523468   57781 ssh_runner.go:195] Run: openssl version
	I0422 12:04:15.533035   57781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14945.pem && ln -fs /usr/share/ca-certificates/14945.pem /etc/ssl/certs/14945.pem"
	I0422 12:04:15.548626   57781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14945.pem
	I0422 12:04:15.554263   57781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 10:51 /usr/share/ca-certificates/14945.pem
	I0422 12:04:15.554331   57781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14945.pem
	I0422 12:04:15.560850   57781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14945.pem /etc/ssl/certs/51391683.0"
	I0422 12:04:15.573386   57781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149452.pem && ln -fs /usr/share/ca-certificates/149452.pem /etc/ssl/certs/149452.pem"
	I0422 12:04:15.585323   57781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149452.pem
	I0422 12:04:15.590557   57781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 10:51 /usr/share/ca-certificates/149452.pem
	I0422 12:04:15.590625   57781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149452.pem
	I0422 12:04:15.597197   57781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149452.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 12:04:15.609013   57781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 12:04:15.622054   57781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 12:04:15.627321   57781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 12:04:15.627380   57781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 12:04:15.633890   57781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 12:04:15.645430   57781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 12:04:15.649949   57781 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 12:04:15.650010   57781 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-643419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-643419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 12:04:15.650117   57781 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 12:04:15.650167   57781 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 12:04:15.698685   57781 cri.go:89] found id: ""
	I0422 12:04:15.698770   57781 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0422 12:04:15.709871   57781 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 12:04:15.720611   57781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 12:04:15.731225   57781 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 12:04:15.731244   57781 kubeadm.go:156] found existing configuration files:
	
	I0422 12:04:15.731292   57781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 12:04:15.741210   57781 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 12:04:15.741263   57781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 12:04:15.754914   57781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 12:04:15.768104   57781 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 12:04:15.768168   57781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 12:04:15.780810   57781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 12:04:15.791051   57781 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 12:04:15.791102   57781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 12:04:15.805330   57781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 12:04:15.819096   57781 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 12:04:15.819161   57781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 12:04:15.829042   57781 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 12:04:15.970037   57781 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 12:04:15.970406   57781 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 12:04:16.134389   57781 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 12:04:16.134598   57781 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 12:04:16.134750   57781 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 12:04:16.404184   57781 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 12:04:16.406212   57781 out.go:204]   - Generating certificates and keys ...
	I0422 12:04:16.406316   57781 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 12:04:16.406410   57781 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 12:04:16.591272   57781 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0422 12:04:16.863698   57781 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0422 12:04:17.199778   57781 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0422 12:04:17.316008   57781 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0422 12:04:17.449353   57781 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0422 12:04:17.449664   57781 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-643419 localhost] and IPs [192.168.50.54 127.0.0.1 ::1]
	I0422 12:04:17.518296   57781 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0422 12:04:17.518667   57781 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-643419 localhost] and IPs [192.168.50.54 127.0.0.1 ::1]
	I0422 12:04:17.629624   57781 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0422 12:04:17.704866   57781 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0422 12:04:17.816420   57781 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0422 12:04:17.821272   57781 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 12:04:17.907170   57781 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 12:04:18.135163   57781 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 12:04:18.332412   57781 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 12:04:18.752364   57781 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 12:04:18.773735   57781 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 12:04:18.774308   57781 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 12:04:18.774532   57781 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 12:04:18.922349   57781 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 12:04:18.924483   57781 out.go:204]   - Booting up control plane ...
	I0422 12:04:18.924669   57781 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 12:04:18.941572   57781 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 12:04:18.944308   57781 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 12:04:18.945700   57781 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 12:04:18.951931   57781 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 12:04:58.951241   57781 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 12:04:58.951737   57781 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 12:04:58.952199   57781 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 12:05:03.953289   57781 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 12:05:03.953578   57781 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 12:05:13.954217   57781 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 12:05:13.954446   57781 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 12:05:33.955912   57781 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 12:05:33.956215   57781 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 12:06:13.955621   57781 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 12:06:13.955864   57781 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 12:06:13.955909   57781 kubeadm.go:309] 
	I0422 12:06:13.955973   57781 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 12:06:13.956064   57781 kubeadm.go:309] 		timed out waiting for the condition
	I0422 12:06:13.956081   57781 kubeadm.go:309] 
	I0422 12:06:13.956125   57781 kubeadm.go:309] 	This error is likely caused by:
	I0422 12:06:13.956178   57781 kubeadm.go:309] 		- The kubelet is not running
	I0422 12:06:13.956331   57781 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 12:06:13.956347   57781 kubeadm.go:309] 
	I0422 12:06:13.956489   57781 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 12:06:13.956538   57781 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 12:06:13.956585   57781 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 12:06:13.956595   57781 kubeadm.go:309] 
	I0422 12:06:13.956764   57781 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 12:06:13.956899   57781 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 12:06:13.956912   57781 kubeadm.go:309] 
	I0422 12:06:13.957007   57781 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 12:06:13.957126   57781 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 12:06:13.957210   57781 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 12:06:13.957305   57781 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 12:06:13.957319   57781 kubeadm.go:309] 
	I0422 12:06:13.957980   57781 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 12:06:13.958075   57781 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 12:06:13.958156   57781 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0422 12:06:13.958329   57781 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-643419 localhost] and IPs [192.168.50.54 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-643419 localhost] and IPs [192.168.50.54 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-643419 localhost] and IPs [192.168.50.54 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-643419 localhost] and IPs [192.168.50.54 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0422 12:06:13.958387   57781 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 12:06:15.841436   57781 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.883021078s)
	I0422 12:06:15.841506   57781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 12:06:15.860089   57781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 12:06:15.873911   57781 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 12:06:15.873931   57781 kubeadm.go:156] found existing configuration files:
	
	I0422 12:06:15.873984   57781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 12:06:15.886834   57781 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 12:06:15.886913   57781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 12:06:15.900633   57781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 12:06:15.913592   57781 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 12:06:15.913664   57781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 12:06:15.925757   57781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 12:06:15.937868   57781 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 12:06:15.937936   57781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 12:06:15.951647   57781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 12:06:15.964488   57781 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 12:06:15.964557   57781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 12:06:15.978187   57781 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 12:06:16.062153   57781 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 12:06:16.062309   57781 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 12:06:16.249298   57781 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 12:06:16.249468   57781 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 12:06:16.249612   57781 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 12:06:16.506930   57781 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 12:06:16.510478   57781 out.go:204]   - Generating certificates and keys ...
	I0422 12:06:16.510579   57781 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 12:06:16.510686   57781 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 12:06:16.510819   57781 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 12:06:16.510912   57781 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 12:06:16.511026   57781 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 12:06:16.511102   57781 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 12:06:16.511189   57781 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 12:06:16.511286   57781 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 12:06:16.511404   57781 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 12:06:16.511529   57781 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 12:06:16.511594   57781 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 12:06:16.511673   57781 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 12:06:16.719019   57781 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 12:06:16.808483   57781 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 12:06:17.399550   57781 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 12:06:17.478636   57781 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 12:06:17.501443   57781 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 12:06:17.502978   57781 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 12:06:17.503055   57781 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 12:06:17.736113   57781 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 12:06:17.739312   57781 out.go:204]   - Booting up control plane ...
	I0422 12:06:17.739445   57781 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 12:06:17.753159   57781 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 12:06:17.759030   57781 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 12:06:17.761738   57781 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 12:06:17.775737   57781 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 12:06:57.778140   57781 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 12:06:57.778309   57781 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 12:06:57.778575   57781 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 12:07:02.779361   57781 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 12:07:02.779614   57781 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 12:07:12.780306   57781 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 12:07:12.780647   57781 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 12:07:32.781600   57781 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 12:07:32.781848   57781 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 12:08:12.781852   57781 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 12:08:12.782119   57781 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 12:08:12.782150   57781 kubeadm.go:309] 
	I0422 12:08:12.782202   57781 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 12:08:12.782255   57781 kubeadm.go:309] 		timed out waiting for the condition
	I0422 12:08:12.782268   57781 kubeadm.go:309] 
	I0422 12:08:12.782310   57781 kubeadm.go:309] 	This error is likely caused by:
	I0422 12:08:12.782347   57781 kubeadm.go:309] 		- The kubelet is not running
	I0422 12:08:12.782435   57781 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 12:08:12.782450   57781 kubeadm.go:309] 
	I0422 12:08:12.782593   57781 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 12:08:12.782642   57781 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 12:08:12.782706   57781 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 12:08:12.782723   57781 kubeadm.go:309] 
	I0422 12:08:12.782875   57781 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 12:08:12.782982   57781 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 12:08:12.782995   57781 kubeadm.go:309] 
	I0422 12:08:12.783111   57781 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 12:08:12.783257   57781 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 12:08:12.783371   57781 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 12:08:12.783492   57781 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 12:08:12.783504   57781 kubeadm.go:309] 
	I0422 12:08:12.784383   57781 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 12:08:12.784499   57781 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 12:08:12.784594   57781 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0422 12:08:12.784673   57781 kubeadm.go:393] duration metric: took 3m57.134666392s to StartCluster
	I0422 12:08:12.784712   57781 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 12:08:12.784761   57781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 12:08:12.836453   57781 cri.go:89] found id: ""
	I0422 12:08:12.836479   57781 logs.go:276] 0 containers: []
	W0422 12:08:12.836489   57781 logs.go:278] No container was found matching "kube-apiserver"
	I0422 12:08:12.836496   57781 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 12:08:12.836556   57781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 12:08:12.881462   57781 cri.go:89] found id: ""
	I0422 12:08:12.881493   57781 logs.go:276] 0 containers: []
	W0422 12:08:12.881505   57781 logs.go:278] No container was found matching "etcd"
	I0422 12:08:12.881512   57781 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 12:08:12.881574   57781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 12:08:12.930274   57781 cri.go:89] found id: ""
	I0422 12:08:12.930304   57781 logs.go:276] 0 containers: []
	W0422 12:08:12.930312   57781 logs.go:278] No container was found matching "coredns"
	I0422 12:08:12.930318   57781 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 12:08:12.930366   57781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 12:08:12.971666   57781 cri.go:89] found id: ""
	I0422 12:08:12.971693   57781 logs.go:276] 0 containers: []
	W0422 12:08:12.971704   57781 logs.go:278] No container was found matching "kube-scheduler"
	I0422 12:08:12.971712   57781 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 12:08:12.971778   57781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 12:08:13.014987   57781 cri.go:89] found id: ""
	I0422 12:08:13.015023   57781 logs.go:276] 0 containers: []
	W0422 12:08:13.015035   57781 logs.go:278] No container was found matching "kube-proxy"
	I0422 12:08:13.015043   57781 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 12:08:13.015108   57781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 12:08:13.059644   57781 cri.go:89] found id: ""
	I0422 12:08:13.059669   57781 logs.go:276] 0 containers: []
	W0422 12:08:13.059677   57781 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 12:08:13.059682   57781 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 12:08:13.059740   57781 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 12:08:13.098367   57781 cri.go:89] found id: ""
	I0422 12:08:13.098398   57781 logs.go:276] 0 containers: []
	W0422 12:08:13.098408   57781 logs.go:278] No container was found matching "kindnet"
	I0422 12:08:13.098419   57781 logs.go:123] Gathering logs for kubelet ...
	I0422 12:08:13.098433   57781 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 12:08:13.155610   57781 logs.go:123] Gathering logs for dmesg ...
	I0422 12:08:13.155643   57781 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 12:08:13.174059   57781 logs.go:123] Gathering logs for describe nodes ...
	I0422 12:08:13.174091   57781 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 12:08:13.310776   57781 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 12:08:13.310807   57781 logs.go:123] Gathering logs for CRI-O ...
	I0422 12:08:13.310823   57781 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 12:08:13.424381   57781 logs.go:123] Gathering logs for container status ...
	I0422 12:08:13.424415   57781 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0422 12:08:13.480532   57781 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0422 12:08:13.480591   57781 out.go:239] * 
	* 
	W0422 12:08:13.480691   57781 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 12:08:13.480724   57781 out.go:239] * 
	* 
	W0422 12:08:13.482006   57781 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 12:08:13.486331   57781 out.go:177] 
	W0422 12:08:13.487770   57781 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 12:08:13.487834   57781 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0422 12:08:13.487864   57781 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0422 12:08:13.489523   57781 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-643419 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
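The failure above comes from kubeadm's [kubelet-check] phase: it repeatedly probes the kubelet's healthz endpoint on localhost:10248 and gives up with "timed out waiting for the condition" when the endpoint never answers. The following Go snippet is only an illustrative sketch of that kind of probe loop, not the test suite's or kubeadm's actual code; the URL is taken from the log lines, while the 40s deadline and 5s interval are assumed values chosen to mirror the log.

// healthzpoll: illustrative sketch of a kubelet healthz probe loop
// (assumed values; not kubeadm's or minikube's implementation).
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls url until it returns HTTP 200 or the deadline expires.
func waitForKubelet(url string, interval, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // kubelet is healthy
			}
		}
		// "connection refused" in the log corresponds to err != nil here.
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	err := waitForKubelet("http://localhost:10248/healthz", 5*time.Second, 40*time.Second)
	if err != nil {
		fmt.Println(err) // mirrors "timed out waiting for the condition" above
	}
}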
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-643419
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-643419: (3.326399593s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-643419 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-643419 status --format={{.Host}}: exit status 7 (78.282569ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
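The test tolerates exit status 7 here because "minikube status" exits non-zero when the host is stopped, while still printing "Stopped" on stdout. The sketch below is a hypothetical helper (not the harness's real code) showing how such an exit code can be read with os/exec; the binary path and profile name are copied from the log.

// Illustrative sketch: run "minikube status" and inspect its exit code,
// treating exit status 7 (stopped host) as non-fatal. Hypothetical helper.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "kubernetes-upgrade-643419")
	out, err := cmd.Output()
	exitCode := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		exitCode = exitErr.ExitCode() // 7 when the host is stopped
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("host=%q exit=%d (7 may be ok for a stopped host)\n", string(out), exitCode)
}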
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-643419 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-643419 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.367923785s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-643419 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-643419 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-643419 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (105.769695ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-643419] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-643419
	    minikube start -p kubernetes-upgrade-643419 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6434192 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-643419 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
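The exit status 106 above is expected: minikube refuses to move an existing v1.30.0 cluster back to v1.20.0 (K8S_DOWNGRADE_UNSUPPORTED) and suggests recreating the cluster instead. The snippet below is a minimal sketch of the version comparison such a guard implies, written against golang.org/x/mod/semver as an assumption; minikube's real check is more involved and lives elsewhere.

// Minimal sketch of a "no downgrade" guard: refuse if the requested
// Kubernetes version is older than the version the cluster already runs.
// Uses golang.org/x/mod/semver (assumed dependency); not minikube's code.
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func checkVersion(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	if err := checkVersion("v1.30.0", "v1.20.0"); err != nil {
		fmt.Println(err) // mirrors the K8S_DOWNGRADE_UNSUPPORTED message in the log
	}
}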
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-643419 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-643419 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m22.162388609s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-04-22 12:10:24.661771644 +0000 UTC m=+5561.277362067
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-643419 -n kubernetes-upgrade-643419
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-643419 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-643419 logs -n 25: (2.330755028s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-230092 sudo                                | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	|         | systemctl cat kubelet                                |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo                                | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo cat                            | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo cat                            | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo                                | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo                                | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	|         | systemctl cat docker                                 |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo cat                            | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo docker                         | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo                                | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo                                | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo cat                            | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo cat                            | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo                                | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo                                | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo                                | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo cat                            | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo cat                            | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo                                | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo                                | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo                                | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo find                           | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p calico-230092 sudo crio                           | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p calico-230092                                     | calico-230092         | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC | 22 Apr 24 12:09 UTC |
	| start   | -p flannel-230092                                    | flannel-230092        | jenkins | v1.33.0 | 22 Apr 24 12:09 UTC |                     |
	|         | --memory=3072                                        |                       |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=flannel --driver=kvm2                          |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-230092 pgrep                       | custom-flannel-230092 | jenkins | v1.33.0 | 22 Apr 24 12:10 UTC | 22 Apr 24 12:10 UTC |
	|         | -a kubelet                                           |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 12:09:44
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 12:09:44.008551   66264 out.go:291] Setting OutFile to fd 1 ...
	I0422 12:09:44.008649   66264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 12:09:44.008653   66264 out.go:304] Setting ErrFile to fd 2...
	I0422 12:09:44.008658   66264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 12:09:44.008890   66264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 12:09:44.009453   66264 out.go:298] Setting JSON to false
	I0422 12:09:44.010861   66264 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6727,"bootTime":1713781057,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 12:09:44.011012   66264 start.go:139] virtualization: kvm guest
	I0422 12:09:44.013353   66264 out.go:177] * [flannel-230092] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 12:09:44.014931   66264 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 12:09:44.014974   66264 notify.go:220] Checking for updates...
	I0422 12:09:44.016511   66264 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 12:09:44.018086   66264 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 12:09:44.019495   66264 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 12:09:44.020974   66264 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 12:09:44.022328   66264 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 12:09:44.024000   66264 config.go:182] Loaded profile config "custom-flannel-230092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 12:09:44.024113   66264 config.go:182] Loaded profile config "enable-default-cni-230092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 12:09:44.024190   66264 config.go:182] Loaded profile config "kubernetes-upgrade-643419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 12:09:44.024272   66264 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 12:09:44.061553   66264 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 12:09:44.063121   66264 start.go:297] selected driver: kvm2
	I0422 12:09:44.063137   66264 start.go:901] validating driver "kvm2" against <nil>
	I0422 12:09:44.063154   66264 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 12:09:44.063903   66264 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 12:09:44.064016   66264 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18711-7633/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 12:09:44.080248   66264 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 12:09:44.080306   66264 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 12:09:44.080533   66264 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 12:09:44.080616   66264 cni.go:84] Creating CNI manager for "flannel"
	I0422 12:09:44.080633   66264 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0422 12:09:44.080709   66264 start.go:340] cluster config:
	{Name:flannel-230092 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-230092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 12:09:44.080863   66264 iso.go:125] acquiring lock: {Name:mkb6ac9fd17ffabc92a94047094130aad6203a95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 12:09:44.082526   66264 out.go:177] * Starting "flannel-230092" primary control-plane node in "flannel-230092" cluster
	I0422 12:09:39.994651   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:39.995199   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | unable to find current IP address of domain enable-default-cni-230092 in network mk-enable-default-cni-230092
	I0422 12:09:39.995230   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | I0422 12:09:39.995109   64751 retry.go:31] will retry after 3.219975575s: waiting for machine to come up
	I0422 12:09:43.674558   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:43.675085   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | unable to find current IP address of domain enable-default-cni-230092 in network mk-enable-default-cni-230092
	I0422 12:09:43.675114   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | I0422 12:09:43.675057   64751 retry.go:31] will retry after 4.508370188s: waiting for machine to come up
	I0422 12:09:45.398057   64175 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 12:09:45.398156   64175 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 12:09:45.398261   64175 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 12:09:45.398349   64175 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 12:09:45.398446   64175 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 12:09:45.398508   64175 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 12:09:45.400454   64175 out.go:204]   - Generating certificates and keys ...
	I0422 12:09:45.400535   64175 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 12:09:45.400593   64175 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 12:09:45.400658   64175 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0422 12:09:45.400710   64175 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0422 12:09:45.400761   64175 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0422 12:09:45.400832   64175 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0422 12:09:45.400882   64175 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0422 12:09:45.401065   64175 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-230092 localhost] and IPs [192.168.61.177 127.0.0.1 ::1]
	I0422 12:09:45.401147   64175 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0422 12:09:45.401265   64175 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-230092 localhost] and IPs [192.168.61.177 127.0.0.1 ::1]
	I0422 12:09:45.401324   64175 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0422 12:09:45.401386   64175 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0422 12:09:45.401437   64175 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0422 12:09:45.401490   64175 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 12:09:45.401537   64175 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 12:09:45.401593   64175 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 12:09:45.401640   64175 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 12:09:45.401693   64175 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 12:09:45.401757   64175 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 12:09:45.401821   64175 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 12:09:45.401880   64175 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 12:09:45.403442   64175 out.go:204]   - Booting up control plane ...
	I0422 12:09:45.403513   64175 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 12:09:45.403578   64175 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 12:09:45.403633   64175 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 12:09:45.403726   64175 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 12:09:45.403826   64175 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 12:09:45.403919   64175 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 12:09:45.404084   64175 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 12:09:45.404203   64175 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 12:09:45.404291   64175 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00159967s
	I0422 12:09:45.404401   64175 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 12:09:45.404452   64175 kubeadm.go:309] [api-check] The API server is healthy after 6.0033638s
	I0422 12:09:45.404536   64175 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 12:09:45.404653   64175 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 12:09:45.404711   64175 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 12:09:45.404948   64175 kubeadm.go:309] [mark-control-plane] Marking the node custom-flannel-230092 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 12:09:45.405025   64175 kubeadm.go:309] [bootstrap-token] Using token: v9fal0.ob0ph2yjbdqel5bw
	I0422 12:09:45.406724   64175 out.go:204]   - Configuring RBAC rules ...
	I0422 12:09:45.406837   64175 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 12:09:45.406920   64175 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 12:09:45.407045   64175 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 12:09:45.407170   64175 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 12:09:45.407338   64175 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 12:09:45.407433   64175 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 12:09:45.407530   64175 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 12:09:45.407571   64175 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 12:09:45.407613   64175 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 12:09:45.407619   64175 kubeadm.go:309] 
	I0422 12:09:45.407698   64175 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 12:09:45.407707   64175 kubeadm.go:309] 
	I0422 12:09:45.407824   64175 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 12:09:45.407836   64175 kubeadm.go:309] 
	I0422 12:09:45.407882   64175 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 12:09:45.407966   64175 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 12:09:45.408039   64175 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 12:09:45.408049   64175 kubeadm.go:309] 
	I0422 12:09:45.408121   64175 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 12:09:45.408137   64175 kubeadm.go:309] 
	I0422 12:09:45.408204   64175 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 12:09:45.408222   64175 kubeadm.go:309] 
	I0422 12:09:45.408297   64175 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 12:09:45.408359   64175 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 12:09:45.408412   64175 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 12:09:45.408419   64175 kubeadm.go:309] 
	I0422 12:09:45.408481   64175 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 12:09:45.408601   64175 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 12:09:45.408618   64175 kubeadm.go:309] 
	I0422 12:09:45.408700   64175 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token v9fal0.ob0ph2yjbdqel5bw \
	I0422 12:09:45.408851   64175 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f \
	I0422 12:09:45.408900   64175 kubeadm.go:309] 	--control-plane 
	I0422 12:09:45.408910   64175 kubeadm.go:309] 
	I0422 12:09:45.408995   64175 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 12:09:45.409005   64175 kubeadm.go:309] 
	I0422 12:09:45.409099   64175 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token v9fal0.ob0ph2yjbdqel5bw \
	I0422 12:09:45.409253   64175 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f 
	I0422 12:09:45.409277   64175 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0422 12:09:45.411610   64175 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0422 12:09:45.412744   64175 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0422 12:09:45.412814   64175 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml
	I0422 12:09:45.418805   64175 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0422 12:09:45.418836   64175 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0422 12:09:45.461792   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0422 12:09:45.949208   64175 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 12:09:45.949317   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-230092 minikube.k8s.io/updated_at=2024_04_22T12_09_45_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437 minikube.k8s.io/name=custom-flannel-230092 minikube.k8s.io/primary=true
	I0422 12:09:45.949319   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:45.975453   64175 ops.go:34] apiserver oom_adj: -16
	I0422 12:09:46.101793   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:44.083613   66264 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 12:09:44.083651   66264 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 12:09:44.083664   66264 cache.go:56] Caching tarball of preloaded images
	I0422 12:09:44.083748   66264 preload.go:173] Found /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 12:09:44.083759   66264 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 12:09:44.083879   66264 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/flannel-230092/config.json ...
	I0422 12:09:44.083902   66264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/flannel-230092/config.json: {Name:mk673c5449cec3d0c37ddb9180ffe372c761b9da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:09:44.084041   66264 start.go:360] acquireMachinesLock for flannel-230092: {Name:mk5cb9b294e703b264c1f97ac968ffd01e93b576 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 12:09:48.187463   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:48.188133   64410 main.go:141] libmachine: (enable-default-cni-230092) Found IP for machine: 192.168.39.18
	I0422 12:09:48.188152   64410 main.go:141] libmachine: (enable-default-cni-230092) Reserving static IP address...
	I0422 12:09:48.188165   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has current primary IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:48.188607   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-230092", mac: "52:54:00:81:5c:ed", ip: "192.168.39.18"} in network mk-enable-default-cni-230092
	I0422 12:09:48.264301   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | Getting to WaitForSSH function...
	I0422 12:09:48.264328   64410 main.go:141] libmachine: (enable-default-cni-230092) Reserved static IP address: 192.168.39.18
	I0422 12:09:48.264342   64410 main.go:141] libmachine: (enable-default-cni-230092) Waiting for SSH to be available...
	I0422 12:09:48.267518   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:48.267927   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:5c:ed", ip: ""} in network mk-enable-default-cni-230092: {Iface:virbr1 ExpiryTime:2024-04-22 13:09:42 +0000 UTC Type:0 Mac:52:54:00:81:5c:ed Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:minikube Clientid:01:52:54:00:81:5c:ed}
	I0422 12:09:48.267971   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:48.268106   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | Using SSH client type: external
	I0422 12:09:48.268138   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | Using SSH private key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/enable-default-cni-230092/id_rsa (-rw-------)
	I0422 12:09:48.268170   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18711-7633/.minikube/machines/enable-default-cni-230092/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 12:09:48.268188   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | About to run SSH command:
	I0422 12:09:48.268201   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | exit 0
	I0422 12:09:48.397731   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | SSH cmd err, output: <nil>: 
	I0422 12:09:48.397963   64410 main.go:141] libmachine: (enable-default-cni-230092) KVM machine creation complete!
	I0422 12:09:48.398305   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetConfigRaw
	I0422 12:09:48.398907   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .DriverName
	I0422 12:09:48.399090   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .DriverName
	I0422 12:09:48.399272   64410 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 12:09:48.399287   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetState
	I0422 12:09:48.400421   64410 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 12:09:48.400434   64410 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 12:09:48.400440   64410 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 12:09:48.400446   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHHostname
	I0422 12:09:48.403499   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:48.404453   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:5c:ed", ip: ""} in network mk-enable-default-cni-230092: {Iface:virbr1 ExpiryTime:2024-04-22 13:09:42 +0000 UTC Type:0 Mac:52:54:00:81:5c:ed Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:enable-default-cni-230092 Clientid:01:52:54:00:81:5c:ed}
	I0422 12:09:48.404491   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:48.404639   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHPort
	I0422 12:09:48.404857   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHKeyPath
	I0422 12:09:48.405061   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHKeyPath
	I0422 12:09:48.405220   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHUsername
	I0422 12:09:48.405427   64410 main.go:141] libmachine: Using SSH client type: native
	I0422 12:09:48.405670   64410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0422 12:09:48.405686   64410 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 12:09:48.524312   64410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 12:09:48.524335   64410 main.go:141] libmachine: Detecting the provisioner...
	I0422 12:09:48.524345   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHHostname
	I0422 12:09:48.527500   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:48.527897   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:5c:ed", ip: ""} in network mk-enable-default-cni-230092: {Iface:virbr1 ExpiryTime:2024-04-22 13:09:42 +0000 UTC Type:0 Mac:52:54:00:81:5c:ed Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:enable-default-cni-230092 Clientid:01:52:54:00:81:5c:ed}
	I0422 12:09:48.527935   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:48.528146   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHPort
	I0422 12:09:48.528334   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHKeyPath
	I0422 12:09:48.528520   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHKeyPath
	I0422 12:09:48.528671   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHUsername
	I0422 12:09:48.528963   64410 main.go:141] libmachine: Using SSH client type: native
	I0422 12:09:48.529194   64410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0422 12:09:48.529210   64410 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 12:09:48.646276   64410 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 12:09:48.646365   64410 main.go:141] libmachine: found compatible host: buildroot
	I0422 12:09:48.646376   64410 main.go:141] libmachine: Provisioning with buildroot...
	I0422 12:09:48.646384   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetMachineName
	I0422 12:09:48.646704   64410 buildroot.go:166] provisioning hostname "enable-default-cni-230092"
	I0422 12:09:48.646735   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetMachineName
	I0422 12:09:48.646946   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHHostname
	I0422 12:09:48.649845   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:48.650339   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:5c:ed", ip: ""} in network mk-enable-default-cni-230092: {Iface:virbr1 ExpiryTime:2024-04-22 13:09:42 +0000 UTC Type:0 Mac:52:54:00:81:5c:ed Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:enable-default-cni-230092 Clientid:01:52:54:00:81:5c:ed}
	I0422 12:09:48.650374   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:48.650566   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHPort
	I0422 12:09:48.650780   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHKeyPath
	I0422 12:09:48.650988   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHKeyPath
	I0422 12:09:48.651176   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHUsername
	I0422 12:09:48.651398   64410 main.go:141] libmachine: Using SSH client type: native
	I0422 12:09:48.651630   64410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0422 12:09:48.651650   64410 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-230092 && echo "enable-default-cni-230092" | sudo tee /etc/hostname
	I0422 12:09:48.781643   64410 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-230092
	
	I0422 12:09:48.781671   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHHostname
	I0422 12:09:48.784423   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:48.784807   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:5c:ed", ip: ""} in network mk-enable-default-cni-230092: {Iface:virbr1 ExpiryTime:2024-04-22 13:09:42 +0000 UTC Type:0 Mac:52:54:00:81:5c:ed Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:enable-default-cni-230092 Clientid:01:52:54:00:81:5c:ed}
	I0422 12:09:48.784839   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:48.785012   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHPort
	I0422 12:09:48.785209   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHKeyPath
	I0422 12:09:48.785396   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHKeyPath
	I0422 12:09:48.785530   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHUsername
	I0422 12:09:48.785722   64410 main.go:141] libmachine: Using SSH client type: native
	I0422 12:09:48.785864   64410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0422 12:09:48.785880   64410 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-230092' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-230092/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-230092' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 12:09:48.912855   64410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 12:09:48.912889   64410 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18711-7633/.minikube CaCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18711-7633/.minikube}
	I0422 12:09:48.912951   64410 buildroot.go:174] setting up certificates
	I0422 12:09:48.912967   64410 provision.go:84] configureAuth start
	I0422 12:09:48.912986   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetMachineName
	I0422 12:09:48.913277   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetIP
	I0422 12:09:48.916321   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:48.916683   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:5c:ed", ip: ""} in network mk-enable-default-cni-230092: {Iface:virbr1 ExpiryTime:2024-04-22 13:09:42 +0000 UTC Type:0 Mac:52:54:00:81:5c:ed Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:enable-default-cni-230092 Clientid:01:52:54:00:81:5c:ed}
	I0422 12:09:48.916715   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:48.916875   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHHostname
	I0422 12:09:48.919020   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:48.919379   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:5c:ed", ip: ""} in network mk-enable-default-cni-230092: {Iface:virbr1 ExpiryTime:2024-04-22 13:09:42 +0000 UTC Type:0 Mac:52:54:00:81:5c:ed Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:enable-default-cni-230092 Clientid:01:52:54:00:81:5c:ed}
	I0422 12:09:48.919409   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:48.919585   64410 provision.go:143] copyHostCerts
	I0422 12:09:48.919657   64410 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem, removing ...
	I0422 12:09:48.919669   64410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 12:09:48.919728   64410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem (1679 bytes)
	I0422 12:09:48.919861   64410 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem, removing ...
	I0422 12:09:48.919873   64410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 12:09:48.919904   64410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem (1078 bytes)
	I0422 12:09:48.920028   64410 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem, removing ...
	I0422 12:09:48.920040   64410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 12:09:48.920063   64410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem (1123 bytes)
	I0422 12:09:48.920139   64410 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-230092 san=[127.0.0.1 192.168.39.18 enable-default-cni-230092 localhost minikube]
	I0422 12:09:49.131909   64410 provision.go:177] copyRemoteCerts
	I0422 12:09:49.131966   64410 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 12:09:49.131988   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHHostname
	I0422 12:09:49.134751   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.135136   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:5c:ed", ip: ""} in network mk-enable-default-cni-230092: {Iface:virbr1 ExpiryTime:2024-04-22 13:09:42 +0000 UTC Type:0 Mac:52:54:00:81:5c:ed Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:enable-default-cni-230092 Clientid:01:52:54:00:81:5c:ed}
	I0422 12:09:49.135180   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.135319   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHPort
	I0422 12:09:49.135499   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHKeyPath
	I0422 12:09:49.135668   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHUsername
	I0422 12:09:49.135782   64410 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/enable-default-cni-230092/id_rsa Username:docker}
	I0422 12:09:49.228984   64410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 12:09:49.256419   64410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0422 12:09:49.284475   64410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 12:09:49.313288   64410 provision.go:87] duration metric: took 400.307438ms to configureAuth
	I0422 12:09:49.313311   64410 buildroot.go:189] setting minikube options for container-runtime
	I0422 12:09:49.313459   64410 config.go:182] Loaded profile config "enable-default-cni-230092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 12:09:49.313526   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHHostname
	I0422 12:09:49.315939   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.316378   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:5c:ed", ip: ""} in network mk-enable-default-cni-230092: {Iface:virbr1 ExpiryTime:2024-04-22 13:09:42 +0000 UTC Type:0 Mac:52:54:00:81:5c:ed Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:enable-default-cni-230092 Clientid:01:52:54:00:81:5c:ed}
	I0422 12:09:49.316402   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.316561   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHPort
	I0422 12:09:49.316742   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHKeyPath
	I0422 12:09:49.316969   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHKeyPath
	I0422 12:09:49.317102   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHUsername
	I0422 12:09:49.317270   64410 main.go:141] libmachine: Using SSH client type: native
	I0422 12:09:49.317444   64410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0422 12:09:49.317464   64410 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 12:09:49.890335   64536 start.go:364] duration metric: took 47.212563304s to acquireMachinesLock for "kubernetes-upgrade-643419"
	I0422 12:09:49.890392   64536 start.go:96] Skipping create...Using existing machine configuration
	I0422 12:09:49.890404   64536 fix.go:54] fixHost starting: 
	I0422 12:09:49.890843   64536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 12:09:49.890912   64536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 12:09:49.907593   64536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37709
	I0422 12:09:49.907993   64536 main.go:141] libmachine: () Calling .GetVersion
	I0422 12:09:49.908538   64536 main.go:141] libmachine: Using API Version  1
	I0422 12:09:49.908563   64536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 12:09:49.908904   64536 main.go:141] libmachine: () Calling .GetMachineName
	I0422 12:09:49.909106   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .DriverName
	I0422 12:09:49.909301   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetState
	I0422 12:09:49.910628   64536 fix.go:112] recreateIfNeeded on kubernetes-upgrade-643419: state=Running err=<nil>
	W0422 12:09:49.910649   64536 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 12:09:49.912514   64536 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-643419" VM ...
	I0422 12:09:46.602120   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:47.102567   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:47.602250   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:48.102471   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:48.602807   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:49.102740   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:49.602674   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:50.102242   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:50.602834   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:51.102688   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:49.613500   64410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 12:09:49.613530   64410 main.go:141] libmachine: Checking connection to Docker...
	I0422 12:09:49.613542   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetURL
	I0422 12:09:49.614910   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | Using libvirt version 6000000
	I0422 12:09:49.617467   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.617885   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:5c:ed", ip: ""} in network mk-enable-default-cni-230092: {Iface:virbr1 ExpiryTime:2024-04-22 13:09:42 +0000 UTC Type:0 Mac:52:54:00:81:5c:ed Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:enable-default-cni-230092 Clientid:01:52:54:00:81:5c:ed}
	I0422 12:09:49.617915   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.618086   64410 main.go:141] libmachine: Docker is up and running!
	I0422 12:09:49.618100   64410 main.go:141] libmachine: Reticulating splines...
	I0422 12:09:49.618108   64410 client.go:171] duration metric: took 24.901722456s to LocalClient.Create
	I0422 12:09:49.618150   64410 start.go:167] duration metric: took 24.90180486s to libmachine.API.Create "enable-default-cni-230092"
	I0422 12:09:49.618162   64410 start.go:293] postStartSetup for "enable-default-cni-230092" (driver="kvm2")
	I0422 12:09:49.618175   64410 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 12:09:49.618204   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .DriverName
	I0422 12:09:49.618442   64410 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 12:09:49.618474   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHHostname
	I0422 12:09:49.621219   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.621655   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:5c:ed", ip: ""} in network mk-enable-default-cni-230092: {Iface:virbr1 ExpiryTime:2024-04-22 13:09:42 +0000 UTC Type:0 Mac:52:54:00:81:5c:ed Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:enable-default-cni-230092 Clientid:01:52:54:00:81:5c:ed}
	I0422 12:09:49.621712   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.621874   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHPort
	I0422 12:09:49.622097   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHKeyPath
	I0422 12:09:49.622310   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHUsername
	I0422 12:09:49.622456   64410 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/enable-default-cni-230092/id_rsa Username:docker}
	I0422 12:09:49.713189   64410 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 12:09:49.718215   64410 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 12:09:49.718235   64410 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/addons for local assets ...
	I0422 12:09:49.718289   64410 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/files for local assets ...
	I0422 12:09:49.718367   64410 filesync.go:149] local asset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> 149452.pem in /etc/ssl/certs
	I0422 12:09:49.718456   64410 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 12:09:49.730751   64410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /etc/ssl/certs/149452.pem (1708 bytes)
	I0422 12:09:49.759095   64410 start.go:296] duration metric: took 140.92002ms for postStartSetup
	I0422 12:09:49.759146   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetConfigRaw
	I0422 12:09:49.759698   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetIP
	I0422 12:09:49.762577   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.762959   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:5c:ed", ip: ""} in network mk-enable-default-cni-230092: {Iface:virbr1 ExpiryTime:2024-04-22 13:09:42 +0000 UTC Type:0 Mac:52:54:00:81:5c:ed Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:enable-default-cni-230092 Clientid:01:52:54:00:81:5c:ed}
	I0422 12:09:49.762990   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.763192   64410 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/config.json ...
	I0422 12:09:49.763369   64410 start.go:128] duration metric: took 25.072282136s to createHost
	I0422 12:09:49.763391   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHHostname
	I0422 12:09:49.766091   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.766508   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:5c:ed", ip: ""} in network mk-enable-default-cni-230092: {Iface:virbr1 ExpiryTime:2024-04-22 13:09:42 +0000 UTC Type:0 Mac:52:54:00:81:5c:ed Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:enable-default-cni-230092 Clientid:01:52:54:00:81:5c:ed}
	I0422 12:09:49.766542   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.766731   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHPort
	I0422 12:09:49.766959   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHKeyPath
	I0422 12:09:49.767171   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHKeyPath
	I0422 12:09:49.767336   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHUsername
	I0422 12:09:49.767568   64410 main.go:141] libmachine: Using SSH client type: native
	I0422 12:09:49.767844   64410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0422 12:09:49.767872   64410 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 12:09:49.890164   64410 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713787789.832978643
	
	I0422 12:09:49.890198   64410 fix.go:216] guest clock: 1713787789.832978643
	I0422 12:09:49.890217   64410 fix.go:229] Guest: 2024-04-22 12:09:49.832978643 +0000 UTC Remote: 2024-04-22 12:09:49.763380546 +0000 UTC m=+55.429292361 (delta=69.598097ms)
	I0422 12:09:49.890244   64410 fix.go:200] guest clock delta is within tolerance: 69.598097ms
	I0422 12:09:49.890250   64410 start.go:83] releasing machines lock for "enable-default-cni-230092", held for 25.199382882s
	I0422 12:09:49.890271   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .DriverName
	I0422 12:09:49.890534   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetIP
	I0422 12:09:49.893505   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.893956   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:5c:ed", ip: ""} in network mk-enable-default-cni-230092: {Iface:virbr1 ExpiryTime:2024-04-22 13:09:42 +0000 UTC Type:0 Mac:52:54:00:81:5c:ed Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:enable-default-cni-230092 Clientid:01:52:54:00:81:5c:ed}
	I0422 12:09:49.893991   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.894252   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .DriverName
	I0422 12:09:49.894783   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .DriverName
	I0422 12:09:49.894966   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .DriverName
	I0422 12:09:49.895069   64410 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 12:09:49.895105   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHHostname
	I0422 12:09:49.895198   64410 ssh_runner.go:195] Run: cat /version.json
	I0422 12:09:49.895230   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHHostname
	I0422 12:09:49.898120   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.898467   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.898604   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:5c:ed", ip: ""} in network mk-enable-default-cni-230092: {Iface:virbr1 ExpiryTime:2024-04-22 13:09:42 +0000 UTC Type:0 Mac:52:54:00:81:5c:ed Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:enable-default-cni-230092 Clientid:01:52:54:00:81:5c:ed}
	I0422 12:09:49.898629   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.898813   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHPort
	I0422 12:09:49.899021   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHKeyPath
	I0422 12:09:49.899043   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:5c:ed", ip: ""} in network mk-enable-default-cni-230092: {Iface:virbr1 ExpiryTime:2024-04-22 13:09:42 +0000 UTC Type:0 Mac:52:54:00:81:5c:ed Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:enable-default-cni-230092 Clientid:01:52:54:00:81:5c:ed}
	I0422 12:09:49.899065   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:49.899206   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHUsername
	I0422 12:09:49.899292   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHPort
	I0422 12:09:49.899367   64410 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/enable-default-cni-230092/id_rsa Username:docker}
	I0422 12:09:49.899444   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHKeyPath
	I0422 12:09:49.899547   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetSSHUsername
	I0422 12:09:49.899701   64410 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/enable-default-cni-230092/id_rsa Username:docker}
	I0422 12:09:49.982530   64410 ssh_runner.go:195] Run: systemctl --version
	I0422 12:09:50.009056   64410 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 12:09:50.175638   64410 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 12:09:50.185206   64410 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 12:09:50.185269   64410 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 12:09:50.206928   64410 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 12:09:50.206950   64410 start.go:494] detecting cgroup driver to use...
	I0422 12:09:50.207023   64410 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 12:09:50.227758   64410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 12:09:50.246960   64410 docker.go:217] disabling cri-docker service (if available) ...
	I0422 12:09:50.247025   64410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 12:09:50.264535   64410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 12:09:50.280354   64410 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 12:09:50.411987   64410 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 12:09:50.585696   64410 docker.go:233] disabling docker service ...
	I0422 12:09:50.585756   64410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 12:09:50.604948   64410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 12:09:50.625409   64410 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 12:09:50.796727   64410 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 12:09:50.942116   64410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 12:09:50.961568   64410 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 12:09:50.984005   64410 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 12:09:50.984070   64410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:09:50.995729   64410 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 12:09:50.995803   64410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:09:51.007054   64410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:09:51.020984   64410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:09:51.032349   64410 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 12:09:51.044138   64410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:09:51.055694   64410 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:09:51.075015   64410 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:09:51.088018   64410 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 12:09:51.098991   64410 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 12:09:51.099045   64410 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 12:09:51.113709   64410 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 12:09:51.124290   64410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 12:09:51.256117   64410 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 12:09:51.418220   64410 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 12:09:51.418290   64410 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 12:09:51.423720   64410 start.go:562] Will wait 60s for crictl version
	I0422 12:09:51.423772   64410 ssh_runner.go:195] Run: which crictl
	I0422 12:09:51.428476   64410 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 12:09:51.467154   64410 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 12:09:51.467256   64410 ssh_runner.go:195] Run: crio --version
	I0422 12:09:51.500221   64410 ssh_runner.go:195] Run: crio --version
	I0422 12:09:51.532629   64410 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 12:09:49.913849   64536 machine.go:94] provisionDockerMachine start ...
	I0422 12:09:49.913875   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .DriverName
	I0422 12:09:49.914095   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:09:49.916559   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:49.917027   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:08:30 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:09:49.917063   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:49.917189   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:09:49.917360   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:09:49.917532   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:09:49.917682   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:09:49.917853   64536 main.go:141] libmachine: Using SSH client type: native
	I0422 12:09:49.918091   64536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0422 12:09:49.918108   64536 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 12:09:50.034255   64536 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-643419
	
	I0422 12:09:50.034286   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetMachineName
	I0422 12:09:50.034602   64536 buildroot.go:166] provisioning hostname "kubernetes-upgrade-643419"
	I0422 12:09:50.034633   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetMachineName
	I0422 12:09:50.034897   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:09:50.038228   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:50.038809   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:08:30 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:09:50.038841   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:50.038998   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:09:50.039207   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:09:50.039403   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:09:50.039546   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:09:50.039726   64536 main.go:141] libmachine: Using SSH client type: native
	I0422 12:09:50.039945   64536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0422 12:09:50.039964   64536 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-643419 && echo "kubernetes-upgrade-643419" | sudo tee /etc/hostname
	I0422 12:09:50.179418   64536 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-643419
	
	I0422 12:09:50.179447   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:09:50.182620   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:50.183118   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:08:30 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:09:50.183164   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:50.183256   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:09:50.183509   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:09:50.183731   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:09:50.183917   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:09:50.184140   64536 main.go:141] libmachine: Using SSH client type: native
	I0422 12:09:50.184335   64536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0422 12:09:50.184353   64536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-643419' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-643419/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-643419' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 12:09:50.311783   64536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 12:09:50.311837   64536 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18711-7633/.minikube CaCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18711-7633/.minikube}
	I0422 12:09:50.311882   64536 buildroot.go:174] setting up certificates
	I0422 12:09:50.311896   64536 provision.go:84] configureAuth start
	I0422 12:09:50.311914   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetMachineName
	I0422 12:09:50.312238   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetIP
	I0422 12:09:50.314791   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:50.315239   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:08:30 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:09:50.315273   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:50.315402   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:09:50.317627   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:50.317955   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:08:30 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:09:50.317978   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:50.318123   64536 provision.go:143] copyHostCerts
	I0422 12:09:50.318170   64536 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem, removing ...
	I0422 12:09:50.318181   64536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 12:09:50.318230   64536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem (1078 bytes)
	I0422 12:09:50.318337   64536 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem, removing ...
	I0422 12:09:50.318346   64536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 12:09:50.318366   64536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem (1123 bytes)
	I0422 12:09:50.318444   64536 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem, removing ...
	I0422 12:09:50.318451   64536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 12:09:50.318470   64536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem (1679 bytes)
	I0422 12:09:50.318525   64536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-643419 san=[127.0.0.1 192.168.50.54 kubernetes-upgrade-643419 localhost minikube]
	I0422 12:09:50.558918   64536 provision.go:177] copyRemoteCerts
	I0422 12:09:50.558971   64536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 12:09:50.558992   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:09:50.562076   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:50.562493   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:08:30 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:09:50.562524   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:50.562664   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:09:50.562873   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:09:50.563031   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:09:50.563175   64536 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419/id_rsa Username:docker}
	I0422 12:09:50.655706   64536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0422 12:09:50.697783   64536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 12:09:50.732525   64536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0422 12:09:50.768169   64536 provision.go:87] duration metric: took 456.256159ms to configureAuth
	I0422 12:09:50.768203   64536 buildroot.go:189] setting minikube options for container-runtime
	I0422 12:09:50.768409   64536 config.go:182] Loaded profile config "kubernetes-upgrade-643419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 12:09:50.768495   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:09:50.771489   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:50.771878   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:08:30 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:09:50.771906   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:50.772081   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:09:50.772321   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:09:50.772543   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:09:50.772697   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:09:50.772911   64536 main.go:141] libmachine: Using SSH client type: native
	I0422 12:09:50.773089   64536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0422 12:09:50.773117   64536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 12:09:51.534054   64410 main.go:141] libmachine: (enable-default-cni-230092) Calling .GetIP
	I0422 12:09:51.536549   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:51.536951   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:5c:ed", ip: ""} in network mk-enable-default-cni-230092: {Iface:virbr1 ExpiryTime:2024-04-22 13:09:42 +0000 UTC Type:0 Mac:52:54:00:81:5c:ed Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:enable-default-cni-230092 Clientid:01:52:54:00:81:5c:ed}
	I0422 12:09:51.536983   64410 main.go:141] libmachine: (enable-default-cni-230092) DBG | domain enable-default-cni-230092 has defined IP address 192.168.39.18 and MAC address 52:54:00:81:5c:ed in network mk-enable-default-cni-230092
	I0422 12:09:51.537126   64410 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 12:09:51.541769   64410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 12:09:51.556668   64410 kubeadm.go:877] updating cluster {Name:enable-default-cni-230092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-230092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 12:09:51.556811   64410 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 12:09:51.556871   64410 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 12:09:51.592716   64410 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 12:09:51.592815   64410 ssh_runner.go:195] Run: which lz4
	I0422 12:09:51.597295   64410 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0422 12:09:51.602130   64410 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 12:09:51.602159   64410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 12:09:53.283250   64410 crio.go:462] duration metric: took 1.685997085s to copy over tarball
	I0422 12:09:53.283317   64410 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 12:09:51.602503   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:52.102939   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:52.602926   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:53.102415   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:53.602691   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:54.102251   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:54.601850   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:55.101969   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:55.601935   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:56.102806   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:56.060344   64410 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.776999265s)
	I0422 12:09:56.060380   64410 crio.go:469] duration metric: took 2.777100466s to extract the tarball
	I0422 12:09:56.060390   64410 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 12:09:56.099297   64410 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 12:09:56.160442   64410 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 12:09:56.160478   64410 cache_images.go:84] Images are preloaded, skipping loading
	I0422 12:09:56.160490   64410 kubeadm.go:928] updating node { 192.168.39.18 8443 v1.30.0 crio true true} ...
	I0422 12:09:56.160630   64410 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-230092 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-230092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0422 12:09:56.160744   64410 ssh_runner.go:195] Run: crio config
	I0422 12:09:56.215669   64410 cni.go:84] Creating CNI manager for "bridge"
	I0422 12:09:56.215701   64410 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 12:09:56.215737   64410 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.18 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-230092 NodeName:enable-default-cni-230092 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 12:09:56.215927   64410 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-230092"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 12:09:56.215997   64410 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 12:09:56.227830   64410 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 12:09:56.227904   64410 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 12:09:56.239560   64410 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0422 12:09:56.261615   64410 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 12:09:56.283292   64410 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0422 12:09:56.306976   64410 ssh_runner.go:195] Run: grep 192.168.39.18	control-plane.minikube.internal$ /etc/hosts
	I0422 12:09:56.313011   64410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 12:09:56.329680   64410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 12:09:56.467137   64410 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 12:09:56.485819   64410 certs.go:68] Setting up /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092 for IP: 192.168.39.18
	I0422 12:09:56.485854   64410 certs.go:194] generating shared ca certs ...
	I0422 12:09:56.485880   64410 certs.go:226] acquiring lock for ca certs: {Name:mk0b77082b88c771d0b00be5267ca31dfee6f85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:09:56.486064   64410 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key
	I0422 12:09:56.486126   64410 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key
	I0422 12:09:56.486139   64410 certs.go:256] generating profile certs ...
	I0422 12:09:56.486217   64410 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/client.key
	I0422 12:09:56.486238   64410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/client.crt with IP's: []
	I0422 12:09:56.568015   64410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/client.crt ...
	I0422 12:09:56.568052   64410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/client.crt: {Name:mke7a2bc195846629822c5676b825db3e1fba09a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:09:56.568264   64410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/client.key ...
	I0422 12:09:56.568282   64410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/client.key: {Name:mk08ef015f47f9f61518b87e6d77d96d526019ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:09:56.568391   64410 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/apiserver.key.656c0eca
	I0422 12:09:56.568418   64410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/apiserver.crt.656c0eca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.18]
	I0422 12:09:56.749473   64410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/apiserver.crt.656c0eca ...
	I0422 12:09:56.749507   64410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/apiserver.crt.656c0eca: {Name:mk09cdc026f1176a9c6f5bfa271add01a8034ef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:09:56.749680   64410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/apiserver.key.656c0eca ...
	I0422 12:09:56.749697   64410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/apiserver.key.656c0eca: {Name:mk0ecb36bd6937cb2fcfc8ea4e49cf95a3a0ed82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:09:56.749797   64410 certs.go:381] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/apiserver.crt.656c0eca -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/apiserver.crt
	I0422 12:09:56.749902   64410 certs.go:385] copying /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/apiserver.key.656c0eca -> /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/apiserver.key
	I0422 12:09:56.749979   64410 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/proxy-client.key
	I0422 12:09:56.749999   64410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/proxy-client.crt with IP's: []
	I0422 12:09:57.167754   64410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/proxy-client.crt ...
	I0422 12:09:57.167786   64410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/proxy-client.crt: {Name:mkc732e551f4599123dfa488565cb081b3cc9fed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:09:57.167962   64410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/proxy-client.key ...
	I0422 12:09:57.167980   64410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/proxy-client.key: {Name:mka82a2306f029c8f1a5bd8dc1c4e27d26b3436d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:09:57.168159   64410 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem (1338 bytes)
	W0422 12:09:57.168205   64410 certs.go:480] ignoring /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945_empty.pem, impossibly tiny 0 bytes
	I0422 12:09:57.168220   64410 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem (1679 bytes)
	I0422 12:09:57.168250   64410 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem (1078 bytes)
	I0422 12:09:57.168293   64410 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem (1123 bytes)
	I0422 12:09:57.168325   64410 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem (1679 bytes)
	I0422 12:09:57.168397   64410 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem (1708 bytes)
	I0422 12:09:57.168999   64410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 12:09:57.217954   64410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 12:09:57.245956   64410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 12:09:57.274846   64410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 12:09:57.301218   64410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0422 12:09:57.326672   64410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 12:09:57.351699   64410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 12:09:57.377075   64410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 12:09:57.402923   64410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 12:09:57.428375   64410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem --> /usr/share/ca-certificates/14945.pem (1338 bytes)
	I0422 12:09:57.456714   64410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /usr/share/ca-certificates/149452.pem (1708 bytes)
	I0422 12:09:57.484635   64410 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 12:09:57.504317   64410 ssh_runner.go:195] Run: openssl version
	I0422 12:09:57.511287   64410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 12:09:57.524659   64410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 12:09:57.529619   64410 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 12:09:57.529693   64410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 12:09:57.536401   64410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 12:09:57.550251   64410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14945.pem && ln -fs /usr/share/ca-certificates/14945.pem /etc/ssl/certs/14945.pem"
	I0422 12:09:57.564532   64410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14945.pem
	I0422 12:09:57.569506   64410 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 10:51 /usr/share/ca-certificates/14945.pem
	I0422 12:09:57.569551   64410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14945.pem
	I0422 12:09:57.575814   64410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14945.pem /etc/ssl/certs/51391683.0"
	I0422 12:09:57.589042   64410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149452.pem && ln -fs /usr/share/ca-certificates/149452.pem /etc/ssl/certs/149452.pem"
	I0422 12:09:57.601660   64410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149452.pem
	I0422 12:09:57.607655   64410 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 10:51 /usr/share/ca-certificates/149452.pem
	I0422 12:09:57.607698   64410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149452.pem
	I0422 12:09:57.615148   64410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149452.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 12:09:57.628277   64410 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 12:09:57.634356   64410 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 12:09:57.634409   64410 kubeadm.go:391] StartCluster: {Name:enable-default-cni-230092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-230092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 12:09:57.634501   64410 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 12:09:57.634556   64410 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 12:09:57.682591   64410 cri.go:89] found id: ""
	I0422 12:09:57.682670   64410 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0422 12:09:57.694626   64410 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 12:09:57.706732   64410 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 12:09:57.721285   64410 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 12:09:57.721307   64410 kubeadm.go:156] found existing configuration files:
	
	I0422 12:09:57.721357   64410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 12:09:57.733187   64410 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 12:09:57.733244   64410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 12:09:57.744440   64410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 12:09:57.756440   64410 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 12:09:57.756503   64410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 12:09:57.769526   64410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 12:09:57.780861   64410 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 12:09:57.780944   64410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 12:09:57.792635   64410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 12:09:57.803628   64410 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 12:09:57.803675   64410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 12:09:57.814833   64410 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 12:09:57.877044   64410 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 12:09:57.877117   64410 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 12:09:58.042908   64410 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 12:09:58.043068   64410 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 12:09:58.043223   64410 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 12:09:58.279733   64410 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 12:09:56.602380   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:57.102764   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:57.601855   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:58.102736   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:58.602699   64175 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:09:58.788221   64175 kubeadm.go:1107] duration metric: took 12.838964125s to wait for elevateKubeSystemPrivileges
	W0422 12:09:58.788261   64175 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 12:09:58.788270   64175 kubeadm.go:393] duration metric: took 26.348189607s to StartCluster
	I0422 12:09:58.788290   64175 settings.go:142] acquiring lock: {Name:mkd680667f0df4166491741d55b55ac111bb0138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:09:58.788367   64175 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 12:09:58.789839   64175 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/kubeconfig: {Name:mkee6de4c6906fe5621e8aeac858a93219648db5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:09:58.790122   64175 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.61.177 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 12:09:58.791909   64175 out.go:177] * Verifying Kubernetes components...
	I0422 12:09:58.790277   64175 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0422 12:09:58.790294   64175 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 12:09:58.790496   64175 config.go:182] Loaded profile config "custom-flannel-230092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 12:09:58.793319   64175 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 12:09:58.793362   64175 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-230092"
	I0422 12:09:58.793381   64175 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-230092"
	I0422 12:09:58.793389   64175 addons.go:234] Setting addon storage-provisioner=true in "custom-flannel-230092"
	I0422 12:09:58.793409   64175 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-230092"
	I0422 12:09:58.793424   64175 host.go:66] Checking if "custom-flannel-230092" exists ...
	I0422 12:09:58.793806   64175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 12:09:58.793843   64175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 12:09:58.793950   64175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 12:09:58.793998   64175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 12:09:58.811497   64175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33815
	I0422 12:09:58.812131   64175 main.go:141] libmachine: () Calling .GetVersion
	I0422 12:09:58.812797   64175 main.go:141] libmachine: Using API Version  1
	I0422 12:09:58.812830   64175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 12:09:58.813238   64175 main.go:141] libmachine: () Calling .GetMachineName
	I0422 12:09:58.813859   64175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 12:09:58.813892   64175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44731
	I0422 12:09:58.813906   64175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 12:09:58.814433   64175 main.go:141] libmachine: () Calling .GetVersion
	I0422 12:09:58.815043   64175 main.go:141] libmachine: Using API Version  1
	I0422 12:09:58.815066   64175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 12:09:58.815407   64175 main.go:141] libmachine: () Calling .GetMachineName
	I0422 12:09:58.815574   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .GetState
	I0422 12:09:58.819169   64175 addons.go:234] Setting addon default-storageclass=true in "custom-flannel-230092"
	I0422 12:09:58.819223   64175 host.go:66] Checking if "custom-flannel-230092" exists ...
	I0422 12:09:58.819619   64175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 12:09:58.819666   64175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 12:09:58.833668   64175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43211
	I0422 12:09:58.834211   64175 main.go:141] libmachine: () Calling .GetVersion
	I0422 12:09:58.834882   64175 main.go:141] libmachine: Using API Version  1
	I0422 12:09:58.834903   64175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 12:09:58.835260   64175 main.go:141] libmachine: () Calling .GetMachineName
	I0422 12:09:58.835468   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .GetState
	I0422 12:09:58.837228   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .DriverName
	I0422 12:09:58.839361   64175 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 12:09:58.304066   64410 out.go:204]   - Generating certificates and keys ...
	I0422 12:09:58.304256   64410 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 12:09:58.304349   64410 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 12:09:58.546765   64410 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0422 12:09:58.650670   64410 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0422 12:09:58.937691   64410 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0422 12:09:59.083530   64410 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0422 12:09:59.148533   64410 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0422 12:09:59.148720   64410 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-230092 localhost] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0422 12:09:59.386337   64410 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0422 12:09:59.386531   64410 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-230092 localhost] and IPs [192.168.39.18 127.0.0.1 ::1]
	I0422 12:09:59.504289   66264 start.go:364] duration metric: took 15.420224651s to acquireMachinesLock for "flannel-230092"
	I0422 12:09:59.504359   66264 start.go:93] Provisioning new machine with config: &{Name:flannel-230092 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-230092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 12:09:59.504499   66264 start.go:125] createHost starting for "" (driver="kvm2")
	I0422 12:09:58.840799   64175 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 12:09:58.840817   64175 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 12:09:58.840836   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .GetSSHHostname
	I0422 12:09:58.838606   64175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33359
	I0422 12:09:58.843368   64175 main.go:141] libmachine: () Calling .GetVersion
	I0422 12:09:58.844129   64175 main.go:141] libmachine: Using API Version  1
	I0422 12:09:58.844145   64175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 12:09:58.844199   64175 main.go:141] libmachine: (custom-flannel-230092) DBG | domain custom-flannel-230092 has defined MAC address 52:54:00:d4:fc:29 in network mk-custom-flannel-230092
	I0422 12:09:58.844591   64175 main.go:141] libmachine: () Calling .GetMachineName
	I0422 12:09:58.844708   64175 main.go:141] libmachine: (custom-flannel-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:fc:29", ip: ""} in network mk-custom-flannel-230092: {Iface:virbr3 ExpiryTime:2024-04-22 13:09:09 +0000 UTC Type:0 Mac:52:54:00:d4:fc:29 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:custom-flannel-230092 Clientid:01:52:54:00:d4:fc:29}
	I0422 12:09:58.844808   64175 main.go:141] libmachine: (custom-flannel-230092) DBG | domain custom-flannel-230092 has defined IP address 192.168.61.177 and MAC address 52:54:00:d4:fc:29 in network mk-custom-flannel-230092
	I0422 12:09:58.845000   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .GetSSHPort
	I0422 12:09:58.845196   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .GetSSHKeyPath
	I0422 12:09:58.845340   64175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 12:09:58.845344   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .GetSSHUsername
	I0422 12:09:58.845383   64175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 12:09:58.845541   64175 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/custom-flannel-230092/id_rsa Username:docker}
	I0422 12:09:58.865487   64175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34003
	I0422 12:09:58.866337   64175 main.go:141] libmachine: () Calling .GetVersion
	I0422 12:09:58.866954   64175 main.go:141] libmachine: Using API Version  1
	I0422 12:09:58.866974   64175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 12:09:58.867366   64175 main.go:141] libmachine: () Calling .GetMachineName
	I0422 12:09:58.867786   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .GetState
	I0422 12:09:58.872173   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .DriverName
	I0422 12:09:58.873051   64175 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 12:09:58.873068   64175 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 12:09:58.873087   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .GetSSHHostname
	I0422 12:09:58.884576   64175 main.go:141] libmachine: (custom-flannel-230092) DBG | domain custom-flannel-230092 has defined MAC address 52:54:00:d4:fc:29 in network mk-custom-flannel-230092
	I0422 12:09:58.885077   64175 main.go:141] libmachine: (custom-flannel-230092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:fc:29", ip: ""} in network mk-custom-flannel-230092: {Iface:virbr3 ExpiryTime:2024-04-22 13:09:09 +0000 UTC Type:0 Mac:52:54:00:d4:fc:29 Iaid: IPaddr:192.168.61.177 Prefix:24 Hostname:custom-flannel-230092 Clientid:01:52:54:00:d4:fc:29}
	I0422 12:09:58.885100   64175 main.go:141] libmachine: (custom-flannel-230092) DBG | domain custom-flannel-230092 has defined IP address 192.168.61.177 and MAC address 52:54:00:d4:fc:29 in network mk-custom-flannel-230092
	I0422 12:09:58.885198   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .GetSSHPort
	I0422 12:09:58.888745   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .GetSSHKeyPath
	I0422 12:09:58.888936   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .GetSSHUsername
	I0422 12:09:58.889114   64175 sshutil.go:53] new ssh client: &{IP:192.168.61.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/custom-flannel-230092/id_rsa Username:docker}
	I0422 12:09:59.172984   64175 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 12:09:59.173036   64175 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0422 12:09:59.204382   64175 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 12:09:59.207878   64175 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 12:09:59.270558   64175 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-230092" to be "Ready" ...
	I0422 12:10:00.326426   64175 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.153361075s)
	I0422 12:10:00.326457   64175 start.go:946] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0422 12:10:00.327825   64175 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.123408878s)
	I0422 12:10:00.327857   64175 main.go:141] libmachine: Making call to close driver server
	I0422 12:10:00.327875   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .Close
	I0422 12:10:00.328281   64175 main.go:141] libmachine: Successfully made call to close driver server
	I0422 12:10:00.328298   64175 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 12:10:00.328308   64175 main.go:141] libmachine: Making call to close driver server
	I0422 12:10:00.328316   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .Close
	I0422 12:10:00.328575   64175 main.go:141] libmachine: Successfully made call to close driver server
	I0422 12:10:00.328588   64175 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 12:10:00.671303   64175 main.go:141] libmachine: Making call to close driver server
	I0422 12:10:00.671332   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .Close
	I0422 12:10:00.671674   64175 main.go:141] libmachine: (custom-flannel-230092) DBG | Closing plugin on server side
	I0422 12:10:00.671720   64175 main.go:141] libmachine: Successfully made call to close driver server
	I0422 12:10:00.671728   64175 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 12:10:00.808882   64175 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.600967505s)
	I0422 12:10:00.808933   64175 main.go:141] libmachine: Making call to close driver server
	I0422 12:10:00.808945   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .Close
	I0422 12:10:00.809282   64175 main.go:141] libmachine: Successfully made call to close driver server
	I0422 12:10:00.809299   64175 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 12:10:00.809308   64175 main.go:141] libmachine: Making call to close driver server
	I0422 12:10:00.809316   64175 main.go:141] libmachine: (custom-flannel-230092) Calling .Close
	I0422 12:10:00.809652   64175 main.go:141] libmachine: (custom-flannel-230092) DBG | Closing plugin on server side
	I0422 12:10:00.809698   64175 main.go:141] libmachine: Successfully made call to close driver server
	I0422 12:10:00.809714   64175 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 12:10:00.813815   64175 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0422 12:09:59.827918   64410 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0422 12:09:59.963554   64410 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0422 12:10:00.152393   64410 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0422 12:10:00.152730   64410 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 12:10:00.330434   64410 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 12:10:00.480888   64410 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 12:10:00.564424   64410 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 12:10:01.327368   64410 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 12:10:01.393244   64410 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 12:10:01.393931   64410 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 12:10:01.396558   64410 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 12:10:00.815422   64175 addons.go:505] duration metric: took 2.025128106s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0422 12:10:00.831624   64175 kapi.go:248] "coredns" deployment in "kube-system" namespace and "custom-flannel-230092" context rescaled to 1 replicas
	I0422 12:10:01.275208   64175 node_ready.go:53] node "custom-flannel-230092" has status "Ready":"False"
	I0422 12:09:58.877566   64536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 12:09:58.877586   64536 machine.go:97] duration metric: took 8.96371902s to provisionDockerMachine
	I0422 12:09:58.877599   64536 start.go:293] postStartSetup for "kubernetes-upgrade-643419" (driver="kvm2")
	I0422 12:09:58.877651   64536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 12:09:58.877674   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .DriverName
	I0422 12:09:58.877938   64536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 12:09:58.877960   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:09:58.881833   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:58.882238   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:08:30 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:09:58.882270   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:58.882512   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:09:58.882666   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:09:58.882777   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:09:58.882877   64536 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419/id_rsa Username:docker}
	I0422 12:09:58.981304   64536 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 12:09:58.987556   64536 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 12:09:58.987583   64536 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/addons for local assets ...
	I0422 12:09:58.987652   64536 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/files for local assets ...
	I0422 12:09:58.987743   64536 filesync.go:149] local asset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> 149452.pem in /etc/ssl/certs
	I0422 12:09:58.987852   64536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 12:09:59.025788   64536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /etc/ssl/certs/149452.pem (1708 bytes)
	I0422 12:09:59.164627   64536 start.go:296] duration metric: took 287.012805ms for postStartSetup
	I0422 12:09:59.164671   64536 fix.go:56] duration metric: took 9.274267532s for fixHost
	I0422 12:09:59.164696   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:09:59.168326   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:59.168764   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:08:30 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:09:59.168820   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:59.169067   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:09:59.169271   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:09:59.169467   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:09:59.169619   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:09:59.169803   64536 main.go:141] libmachine: Using SSH client type: native
	I0422 12:09:59.170045   64536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0422 12:09:59.170062   64536 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 12:09:59.504145   64536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713787799.495895017
	
	I0422 12:09:59.504170   64536 fix.go:216] guest clock: 1713787799.495895017
	I0422 12:09:59.504179   64536 fix.go:229] Guest: 2024-04-22 12:09:59.495895017 +0000 UTC Remote: 2024-04-22 12:09:59.164676407 +0000 UTC m=+56.662059313 (delta=331.21861ms)
	I0422 12:09:59.504206   64536 fix.go:200] guest clock delta is within tolerance: 331.21861ms
	I0422 12:09:59.504213   64536 start.go:83] releasing machines lock for "kubernetes-upgrade-643419", held for 9.613842757s
	I0422 12:09:59.504266   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .DriverName
	I0422 12:09:59.504520   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetIP
	I0422 12:09:59.507696   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:59.508129   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:08:30 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:09:59.508241   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:59.508654   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .DriverName
	I0422 12:09:59.509184   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .DriverName
	I0422 12:09:59.509336   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .DriverName
	I0422 12:09:59.509424   64536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 12:09:59.509460   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:09:59.509792   64536 ssh_runner.go:195] Run: cat /version.json
	I0422 12:09:59.509814   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHHostname
	I0422 12:09:59.524159   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:59.524791   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:08:30 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:09:59.524834   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:59.524881   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:59.525134   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:09:59.525373   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:09:59.525429   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:08:30 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:09:59.525455   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:09:59.525598   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:09:59.525693   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHPort
	I0422 12:09:59.525872   64536 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419/id_rsa Username:docker}
	I0422 12:09:59.526159   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHKeyPath
	I0422 12:09:59.526347   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetSSHUsername
	I0422 12:09:59.526545   64536 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/kubernetes-upgrade-643419/id_rsa Username:docker}
	I0422 12:09:59.826232   64536 ssh_runner.go:195] Run: systemctl --version
	I0422 12:10:00.005024   64536 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 12:10:00.453502   64536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 12:10:00.467421   64536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 12:10:00.467482   64536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 12:10:00.495100   64536 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0422 12:10:00.495127   64536 start.go:494] detecting cgroup driver to use...
	I0422 12:10:00.495193   64536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 12:10:00.549233   64536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 12:10:00.602837   64536 docker.go:217] disabling cri-docker service (if available) ...
	I0422 12:10:00.602901   64536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 12:10:00.679953   64536 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 12:10:00.779824   64536 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 12:10:01.159733   64536 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 12:10:01.588006   64536 docker.go:233] disabling docker service ...
	I0422 12:10:01.588097   64536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 12:10:01.641003   64536 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 12:10:01.673140   64536 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 12:10:01.938064   64536 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 12:10:02.200721   64536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 12:10:02.220484   64536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 12:10:02.256416   64536 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 12:10:02.256488   64536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:10:02.275974   64536 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 12:10:02.276047   64536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:10:02.295819   64536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:10:02.311370   64536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:10:02.328022   64536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 12:10:02.346590   64536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:10:02.361496   64536 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:10:02.380730   64536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:10:02.395324   64536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 12:10:02.433211   64536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 12:10:02.450772   64536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 12:09:59.507443   66264 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0422 12:09:59.507686   66264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 12:09:59.507743   66264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 12:09:59.528111   66264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36405
	I0422 12:09:59.528590   66264 main.go:141] libmachine: () Calling .GetVersion
	I0422 12:09:59.529348   66264 main.go:141] libmachine: Using API Version  1
	I0422 12:09:59.529379   66264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 12:09:59.529802   66264 main.go:141] libmachine: () Calling .GetMachineName
	I0422 12:09:59.530061   66264 main.go:141] libmachine: (flannel-230092) Calling .GetMachineName
	I0422 12:09:59.530243   66264 main.go:141] libmachine: (flannel-230092) Calling .DriverName
	I0422 12:09:59.530394   66264 start.go:159] libmachine.API.Create for "flannel-230092" (driver="kvm2")
	I0422 12:09:59.530444   66264 client.go:168] LocalClient.Create starting
	I0422 12:09:59.530481   66264 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem
	I0422 12:09:59.530518   66264 main.go:141] libmachine: Decoding PEM data...
	I0422 12:09:59.530537   66264 main.go:141] libmachine: Parsing certificate...
	I0422 12:09:59.530596   66264 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem
	I0422 12:09:59.530625   66264 main.go:141] libmachine: Decoding PEM data...
	I0422 12:09:59.530646   66264 main.go:141] libmachine: Parsing certificate...
	I0422 12:09:59.530679   66264 main.go:141] libmachine: Running pre-create checks...
	I0422 12:09:59.530693   66264 main.go:141] libmachine: (flannel-230092) Calling .PreCreateCheck
	I0422 12:09:59.531189   66264 main.go:141] libmachine: (flannel-230092) Calling .GetConfigRaw
	I0422 12:09:59.531667   66264 main.go:141] libmachine: Creating machine...
	I0422 12:09:59.531688   66264 main.go:141] libmachine: (flannel-230092) Calling .Create
	I0422 12:09:59.531920   66264 main.go:141] libmachine: (flannel-230092) Creating KVM machine...
	I0422 12:09:59.533430   66264 main.go:141] libmachine: (flannel-230092) DBG | found existing default KVM network
	I0422 12:09:59.535419   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:09:59.535210   66430 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6a:ff:14} reservation:<nil>}
	I0422 12:09:59.536412   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:09:59.536333   66430 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:6f:d2:86} reservation:<nil>}
	I0422 12:09:59.537594   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:09:59.537494   66430 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:92:7a:ad} reservation:<nil>}
	I0422 12:09:59.539039   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:09:59.538964   66430 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000338590}
	I0422 12:09:59.539187   66264 main.go:141] libmachine: (flannel-230092) DBG | created network xml: 
	I0422 12:09:59.539205   66264 main.go:141] libmachine: (flannel-230092) DBG | <network>
	I0422 12:09:59.539505   66264 main.go:141] libmachine: (flannel-230092) DBG |   <name>mk-flannel-230092</name>
	I0422 12:09:59.539535   66264 main.go:141] libmachine: (flannel-230092) DBG |   <dns enable='no'/>
	I0422 12:09:59.539551   66264 main.go:141] libmachine: (flannel-230092) DBG |   
	I0422 12:09:59.539564   66264 main.go:141] libmachine: (flannel-230092) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0422 12:09:59.539579   66264 main.go:141] libmachine: (flannel-230092) DBG |     <dhcp>
	I0422 12:09:59.539591   66264 main.go:141] libmachine: (flannel-230092) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0422 12:09:59.539599   66264 main.go:141] libmachine: (flannel-230092) DBG |     </dhcp>
	I0422 12:09:59.539607   66264 main.go:141] libmachine: (flannel-230092) DBG |   </ip>
	I0422 12:09:59.539619   66264 main.go:141] libmachine: (flannel-230092) DBG |   
	I0422 12:09:59.539626   66264 main.go:141] libmachine: (flannel-230092) DBG | </network>
	I0422 12:09:59.539643   66264 main.go:141] libmachine: (flannel-230092) DBG | 
	I0422 12:09:59.546979   66264 main.go:141] libmachine: (flannel-230092) DBG | trying to create private KVM network mk-flannel-230092 192.168.72.0/24...
	I0422 12:09:59.664789   66264 main.go:141] libmachine: (flannel-230092) DBG | private KVM network mk-flannel-230092 192.168.72.0/24 created
	I0422 12:09:59.664819   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:09:59.664728   66430 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 12:09:59.664843   66264 main.go:141] libmachine: (flannel-230092) Setting up store path in /home/jenkins/minikube-integration/18711-7633/.minikube/machines/flannel-230092 ...
	I0422 12:09:59.664871   66264 main.go:141] libmachine: (flannel-230092) Building disk image from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0422 12:09:59.664906   66264 main.go:141] libmachine: (flannel-230092) Downloading /home/jenkins/minikube-integration/18711-7633/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso...
	I0422 12:09:59.963536   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:09:59.963397   66430 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/flannel-230092/id_rsa...
	I0422 12:10:00.358311   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:10:00.358166   66430 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/flannel-230092/flannel-230092.rawdisk...
	I0422 12:10:00.358351   66264 main.go:141] libmachine: (flannel-230092) DBG | Writing magic tar header
	I0422 12:10:00.358366   66264 main.go:141] libmachine: (flannel-230092) DBG | Writing SSH key tar header
	I0422 12:10:00.358379   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:10:00.358309   66430 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/flannel-230092 ...
	I0422 12:10:00.358499   66264 main.go:141] libmachine: (flannel-230092) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/flannel-230092
	I0422 12:10:00.358521   66264 main.go:141] libmachine: (flannel-230092) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube/machines
	I0422 12:10:00.358533   66264 main.go:141] libmachine: (flannel-230092) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines/flannel-230092 (perms=drwx------)
	I0422 12:10:00.358547   66264 main.go:141] libmachine: (flannel-230092) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube/machines (perms=drwxr-xr-x)
	I0422 12:10:00.358567   66264 main.go:141] libmachine: (flannel-230092) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633/.minikube (perms=drwxr-xr-x)
	I0422 12:10:00.358582   66264 main.go:141] libmachine: (flannel-230092) Setting executable bit set on /home/jenkins/minikube-integration/18711-7633 (perms=drwxrwxr-x)
	I0422 12:10:00.358592   66264 main.go:141] libmachine: (flannel-230092) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 12:10:00.358604   66264 main.go:141] libmachine: (flannel-230092) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 12:10:00.358618   66264 main.go:141] libmachine: (flannel-230092) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 12:10:00.358627   66264 main.go:141] libmachine: (flannel-230092) Creating domain...
	I0422 12:10:00.358640   66264 main.go:141] libmachine: (flannel-230092) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18711-7633
	I0422 12:10:00.358649   66264 main.go:141] libmachine: (flannel-230092) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 12:10:00.358659   66264 main.go:141] libmachine: (flannel-230092) DBG | Checking permissions on dir: /home/jenkins
	I0422 12:10:00.358667   66264 main.go:141] libmachine: (flannel-230092) DBG | Checking permissions on dir: /home
	I0422 12:10:00.358678   66264 main.go:141] libmachine: (flannel-230092) DBG | Skipping /home - not owner
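The "Writing magic tar header" / "Writing SSH key tar header" steps above seed the freshly created raw disk image with a small tar archive carrying the generated SSH key, so the guest can pick it up on first boot. A minimal standard-library sketch of that idea in Go follows; the file names, disk size, and helper name are illustrative assumptions, not the kvm2 driver's actual code.

package main

import (
	"archive/tar"
	"log"
	"os"
)

// seedRawDisk writes a tar archive containing the SSH public key at the
// start of a sparse raw disk file, then extends the file to diskSize bytes.
// Illustrative sketch only; names and sizes are assumptions, not the real driver code.
func seedRawDisk(diskPath, pubKeyPath string, diskSize int64) error {
	key, err := os.ReadFile(pubKeyPath)
	if err != nil {
		return err
	}
	f, err := os.Create(diskPath)
	if err != nil {
		return err
	}
	defer f.Close()

	tw := tar.NewWriter(f)
	hdr := &tar.Header{Name: "id_rsa.pub", Mode: 0644, Size: int64(len(key))}
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	if _, err := tw.Write(key); err != nil {
		return err
	}
	if err := tw.Close(); err != nil {
		return err
	}
	// Grow the file to the full disk size without allocating blocks (sparse).
	return f.Truncate(diskSize)
}

func main() {
	// Hypothetical paths/size for illustration.
	if err := seedRawDisk("flannel-230092.rawdisk", "id_rsa.pub", 20<<30); err != nil {
		log.Fatal(err)
	}
}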
	I0422 12:10:00.359940   66264 main.go:141] libmachine: (flannel-230092) define libvirt domain using xml: 
	I0422 12:10:00.359965   66264 main.go:141] libmachine: (flannel-230092) <domain type='kvm'>
	I0422 12:10:00.359976   66264 main.go:141] libmachine: (flannel-230092)   <name>flannel-230092</name>
	I0422 12:10:00.359983   66264 main.go:141] libmachine: (flannel-230092)   <memory unit='MiB'>3072</memory>
	I0422 12:10:00.359992   66264 main.go:141] libmachine: (flannel-230092)   <vcpu>2</vcpu>
	I0422 12:10:00.359999   66264 main.go:141] libmachine: (flannel-230092)   <features>
	I0422 12:10:00.360009   66264 main.go:141] libmachine: (flannel-230092)     <acpi/>
	I0422 12:10:00.360019   66264 main.go:141] libmachine: (flannel-230092)     <apic/>
	I0422 12:10:00.360027   66264 main.go:141] libmachine: (flannel-230092)     <pae/>
	I0422 12:10:00.360035   66264 main.go:141] libmachine: (flannel-230092)     
	I0422 12:10:00.360063   66264 main.go:141] libmachine: (flannel-230092)   </features>
	I0422 12:10:00.360084   66264 main.go:141] libmachine: (flannel-230092)   <cpu mode='host-passthrough'>
	I0422 12:10:00.360093   66264 main.go:141] libmachine: (flannel-230092)   
	I0422 12:10:00.360104   66264 main.go:141] libmachine: (flannel-230092)   </cpu>
	I0422 12:10:00.360111   66264 main.go:141] libmachine: (flannel-230092)   <os>
	I0422 12:10:00.360116   66264 main.go:141] libmachine: (flannel-230092)     <type>hvm</type>
	I0422 12:10:00.360124   66264 main.go:141] libmachine: (flannel-230092)     <boot dev='cdrom'/>
	I0422 12:10:00.360131   66264 main.go:141] libmachine: (flannel-230092)     <boot dev='hd'/>
	I0422 12:10:00.360141   66264 main.go:141] libmachine: (flannel-230092)     <bootmenu enable='no'/>
	I0422 12:10:00.360147   66264 main.go:141] libmachine: (flannel-230092)   </os>
	I0422 12:10:00.360153   66264 main.go:141] libmachine: (flannel-230092)   <devices>
	I0422 12:10:00.360160   66264 main.go:141] libmachine: (flannel-230092)     <disk type='file' device='cdrom'>
	I0422 12:10:00.360174   66264 main.go:141] libmachine: (flannel-230092)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/flannel-230092/boot2docker.iso'/>
	I0422 12:10:00.360182   66264 main.go:141] libmachine: (flannel-230092)       <target dev='hdc' bus='scsi'/>
	I0422 12:10:00.360190   66264 main.go:141] libmachine: (flannel-230092)       <readonly/>
	I0422 12:10:00.360211   66264 main.go:141] libmachine: (flannel-230092)     </disk>
	I0422 12:10:00.360223   66264 main.go:141] libmachine: (flannel-230092)     <disk type='file' device='disk'>
	I0422 12:10:00.360233   66264 main.go:141] libmachine: (flannel-230092)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 12:10:00.360246   66264 main.go:141] libmachine: (flannel-230092)       <source file='/home/jenkins/minikube-integration/18711-7633/.minikube/machines/flannel-230092/flannel-230092.rawdisk'/>
	I0422 12:10:00.360254   66264 main.go:141] libmachine: (flannel-230092)       <target dev='hda' bus='virtio'/>
	I0422 12:10:00.360262   66264 main.go:141] libmachine: (flannel-230092)     </disk>
	I0422 12:10:00.360269   66264 main.go:141] libmachine: (flannel-230092)     <interface type='network'>
	I0422 12:10:00.360278   66264 main.go:141] libmachine: (flannel-230092)       <source network='mk-flannel-230092'/>
	I0422 12:10:00.360285   66264 main.go:141] libmachine: (flannel-230092)       <model type='virtio'/>
	I0422 12:10:00.360293   66264 main.go:141] libmachine: (flannel-230092)     </interface>
	I0422 12:10:00.360300   66264 main.go:141] libmachine: (flannel-230092)     <interface type='network'>
	I0422 12:10:00.360309   66264 main.go:141] libmachine: (flannel-230092)       <source network='default'/>
	I0422 12:10:00.360316   66264 main.go:141] libmachine: (flannel-230092)       <model type='virtio'/>
	I0422 12:10:00.360325   66264 main.go:141] libmachine: (flannel-230092)     </interface>
	I0422 12:10:00.360332   66264 main.go:141] libmachine: (flannel-230092)     <serial type='pty'>
	I0422 12:10:00.360340   66264 main.go:141] libmachine: (flannel-230092)       <target port='0'/>
	I0422 12:10:00.360347   66264 main.go:141] libmachine: (flannel-230092)     </serial>
	I0422 12:10:00.360356   66264 main.go:141] libmachine: (flannel-230092)     <console type='pty'>
	I0422 12:10:00.360369   66264 main.go:141] libmachine: (flannel-230092)       <target type='serial' port='0'/>
	I0422 12:10:00.360377   66264 main.go:141] libmachine: (flannel-230092)     </console>
	I0422 12:10:00.360384   66264 main.go:141] libmachine: (flannel-230092)     <rng model='virtio'>
	I0422 12:10:00.360393   66264 main.go:141] libmachine: (flannel-230092)       <backend model='random'>/dev/random</backend>
	I0422 12:10:00.360400   66264 main.go:141] libmachine: (flannel-230092)     </rng>
	I0422 12:10:00.360407   66264 main.go:141] libmachine: (flannel-230092)     
	I0422 12:10:00.360412   66264 main.go:141] libmachine: (flannel-230092)     
	I0422 12:10:00.360420   66264 main.go:141] libmachine: (flannel-230092)   </devices>
	I0422 12:10:00.360427   66264 main.go:141] libmachine: (flannel-230092) </domain>
	I0422 12:10:00.360437   66264 main.go:141] libmachine: (flannel-230092) 
	I0422 12:10:00.462875   66264 main.go:141] libmachine: (flannel-230092) DBG | domain flannel-230092 has defined MAC address 52:54:00:26:34:c0 in network default
	I0422 12:10:00.463711   66264 main.go:141] libmachine: (flannel-230092) Ensuring networks are active...
	I0422 12:10:00.463743   66264 main.go:141] libmachine: (flannel-230092) DBG | domain flannel-230092 has defined MAC address 52:54:00:df:a8:70 in network mk-flannel-230092
	I0422 12:10:00.464600   66264 main.go:141] libmachine: (flannel-230092) Ensuring network default is active
	I0422 12:10:00.464965   66264 main.go:141] libmachine: (flannel-230092) Ensuring network mk-flannel-230092 is active
	I0422 12:10:00.465599   66264 main.go:141] libmachine: (flannel-230092) Getting domain xml...
	I0422 12:10:00.466508   66264 main.go:141] libmachine: (flannel-230092) Creating domain...
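The block above shows the full sequence: build the domain XML, define it with libvirt, make sure the two networks are active, and finally create (start) the domain. Below is a minimal sketch of the define-and-start part using the libvirt Go bindings; the import path, file name, and helper are assumptions for illustration rather than minikube's kvm2 driver code.

package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
)

// defineAndStart defines a libvirt domain from an XML document and starts it,
// roughly the "define libvirt domain using xml" / "Creating domain..." steps
// in the log. Error handling is minimal; this is an illustrative sketch only.
func defineAndStart(uri, domainXML string) error {
	conn, err := libvirt.NewConnect(uri)
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	// Equivalent of "Creating domain...": boot the defined domain.
	return dom.Create()
}

func main() {
	// Hypothetical file holding domain XML like the one printed above.
	xmlBytes, err := os.ReadFile("flannel-230092.xml")
	if err != nil {
		log.Fatal(err)
	}
	if err := defineAndStart("qemu:///system", string(xmlBytes)); err != nil {
		log.Fatal(err)
	}
}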
	I0422 12:10:01.886533   66264 main.go:141] libmachine: (flannel-230092) Waiting to get IP...
	I0422 12:10:01.887633   66264 main.go:141] libmachine: (flannel-230092) DBG | domain flannel-230092 has defined MAC address 52:54:00:df:a8:70 in network mk-flannel-230092
	I0422 12:10:01.888362   66264 main.go:141] libmachine: (flannel-230092) DBG | unable to find current IP address of domain flannel-230092 in network mk-flannel-230092
	I0422 12:10:01.888396   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:10:01.888344   66430 retry.go:31] will retry after 233.939964ms: waiting for machine to come up
	I0422 12:10:02.124025   66264 main.go:141] libmachine: (flannel-230092) DBG | domain flannel-230092 has defined MAC address 52:54:00:df:a8:70 in network mk-flannel-230092
	I0422 12:10:02.124676   66264 main.go:141] libmachine: (flannel-230092) DBG | unable to find current IP address of domain flannel-230092 in network mk-flannel-230092
	I0422 12:10:02.124727   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:10:02.124652   66430 retry.go:31] will retry after 269.041969ms: waiting for machine to come up
	I0422 12:10:02.395327   66264 main.go:141] libmachine: (flannel-230092) DBG | domain flannel-230092 has defined MAC address 52:54:00:df:a8:70 in network mk-flannel-230092
	I0422 12:10:02.395960   66264 main.go:141] libmachine: (flannel-230092) DBG | unable to find current IP address of domain flannel-230092 in network mk-flannel-230092
	I0422 12:10:02.395981   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:10:02.395902   66430 retry.go:31] will retry after 486.32269ms: waiting for machine to come up
	I0422 12:10:02.883743   66264 main.go:141] libmachine: (flannel-230092) DBG | domain flannel-230092 has defined MAC address 52:54:00:df:a8:70 in network mk-flannel-230092
	I0422 12:10:02.884298   66264 main.go:141] libmachine: (flannel-230092) DBG | unable to find current IP address of domain flannel-230092 in network mk-flannel-230092
	I0422 12:10:02.884400   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:10:02.884276   66430 retry.go:31] will retry after 406.989896ms: waiting for machine to come up
	I0422 12:10:03.293067   66264 main.go:141] libmachine: (flannel-230092) DBG | domain flannel-230092 has defined MAC address 52:54:00:df:a8:70 in network mk-flannel-230092
	I0422 12:10:03.293683   66264 main.go:141] libmachine: (flannel-230092) DBG | unable to find current IP address of domain flannel-230092 in network mk-flannel-230092
	I0422 12:10:03.293715   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:10:03.293630   66430 retry.go:31] will retry after 543.129675ms: waiting for machine to come up
	I0422 12:10:03.837857   66264 main.go:141] libmachine: (flannel-230092) DBG | domain flannel-230092 has defined MAC address 52:54:00:df:a8:70 in network mk-flannel-230092
	I0422 12:10:03.838448   66264 main.go:141] libmachine: (flannel-230092) DBG | unable to find current IP address of domain flannel-230092 in network mk-flannel-230092
	I0422 12:10:03.838484   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:10:03.838399   66430 retry.go:31] will retry after 612.98483ms: waiting for machine to come up
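The repeated "unable to find current IP address ... will retry after ...: waiting for machine to come up" lines come from a retry loop that sleeps a randomized, growing delay between lookups. A standard-library sketch of that pattern; the initial delay, growth factor, and function names are illustrative assumptions.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or the deadline passes,
// sleeping a randomized, growing delay between attempts - the same shape as
// the "will retry after 233.939964ms" lines in the log.
func retryWithBackoff(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay roughly 1.5x per attempt
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(5*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
	fmt.Println("result:", err)
}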
	I0422 12:10:01.398336   64410 out.go:204]   - Booting up control plane ...
	I0422 12:10:01.398455   64410 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 12:10:01.400672   64410 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 12:10:01.401570   64410 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 12:10:01.419847   64410 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 12:10:01.421099   64410 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 12:10:01.421164   64410 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 12:10:01.586839   64410 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 12:10:01.586956   64410 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 12:10:02.090880   64410 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 503.202625ms
	I0422 12:10:02.091013   64410 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 12:10:03.775879   64175 node_ready.go:53] node "custom-flannel-230092" has status "Ready":"False"
	I0422 12:10:06.275450   64175 node_ready.go:53] node "custom-flannel-230092" has status "Ready":"False"
	I0422 12:10:02.725091   64536 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 12:10:07.593767   64410 kubeadm.go:309] [api-check] The API server is healthy after 5.503169736s
	I0422 12:10:07.618242   64410 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 12:10:07.641482   64410 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 12:10:07.692414   64410 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 12:10:07.692669   64410 kubeadm.go:309] [mark-control-plane] Marking the node enable-default-cni-230092 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 12:10:07.709989   64410 kubeadm.go:309] [bootstrap-token] Using token: nuhh42.t1yxh6b3c53mmtd5
	I0422 12:10:07.711598   64410 out.go:204]   - Configuring RBAC rules ...
	I0422 12:10:07.711781   64410 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 12:10:07.719354   64410 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 12:10:07.733455   64410 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 12:10:07.739504   64410 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 12:10:07.752352   64410 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 12:10:07.757277   64410 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 12:10:08.003643   64410 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 12:10:08.446379   64410 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 12:10:09.001838   64410 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 12:10:09.002947   64410 kubeadm.go:309] 
	I0422 12:10:09.003071   64410 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 12:10:09.003094   64410 kubeadm.go:309] 
	I0422 12:10:09.003252   64410 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 12:10:09.003267   64410 kubeadm.go:309] 
	I0422 12:10:09.003308   64410 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 12:10:09.003407   64410 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 12:10:09.003489   64410 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 12:10:09.003510   64410 kubeadm.go:309] 
	I0422 12:10:09.003583   64410 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 12:10:09.003594   64410 kubeadm.go:309] 
	I0422 12:10:09.003667   64410 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 12:10:09.003677   64410 kubeadm.go:309] 
	I0422 12:10:09.003754   64410 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 12:10:09.003868   64410 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 12:10:09.003962   64410 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 12:10:09.003971   64410 kubeadm.go:309] 
	I0422 12:10:09.004072   64410 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 12:10:09.004169   64410 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 12:10:09.004179   64410 kubeadm.go:309] 
	I0422 12:10:09.004306   64410 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token nuhh42.t1yxh6b3c53mmtd5 \
	I0422 12:10:09.004456   64410 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f \
	I0422 12:10:09.004491   64410 kubeadm.go:309] 	--control-plane 
	I0422 12:10:09.004501   64410 kubeadm.go:309] 
	I0422 12:10:09.004617   64410 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 12:10:09.004628   64410 kubeadm.go:309] 
	I0422 12:10:09.004745   64410 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token nuhh42.t1yxh6b3c53mmtd5 \
	I0422 12:10:09.004897   64410 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9af0990320e7d045d6d22d7e7276e7343f7b0ca44ae792f4916b4a79d803646f 
	I0422 12:10:09.005057   64410 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 12:10:09.005122   64410 cni.go:84] Creating CNI manager for "bridge"
	I0422 12:10:09.007577   64410 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 12:10:04.453446   66264 main.go:141] libmachine: (flannel-230092) DBG | domain flannel-230092 has defined MAC address 52:54:00:df:a8:70 in network mk-flannel-230092
	I0422 12:10:04.454003   66264 main.go:141] libmachine: (flannel-230092) DBG | unable to find current IP address of domain flannel-230092 in network mk-flannel-230092
	I0422 12:10:04.454025   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:10:04.453956   66430 retry.go:31] will retry after 791.950215ms: waiting for machine to come up
	I0422 12:10:05.246952   66264 main.go:141] libmachine: (flannel-230092) DBG | domain flannel-230092 has defined MAC address 52:54:00:df:a8:70 in network mk-flannel-230092
	I0422 12:10:05.247490   66264 main.go:141] libmachine: (flannel-230092) DBG | unable to find current IP address of domain flannel-230092 in network mk-flannel-230092
	I0422 12:10:05.247518   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:10:05.247443   66430 retry.go:31] will retry after 1.16760538s: waiting for machine to come up
	I0422 12:10:06.416843   66264 main.go:141] libmachine: (flannel-230092) DBG | domain flannel-230092 has defined MAC address 52:54:00:df:a8:70 in network mk-flannel-230092
	I0422 12:10:06.417408   66264 main.go:141] libmachine: (flannel-230092) DBG | unable to find current IP address of domain flannel-230092 in network mk-flannel-230092
	I0422 12:10:06.417459   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:10:06.417357   66430 retry.go:31] will retry after 1.709072173s: waiting for machine to come up
	I0422 12:10:08.128380   66264 main.go:141] libmachine: (flannel-230092) DBG | domain flannel-230092 has defined MAC address 52:54:00:df:a8:70 in network mk-flannel-230092
	I0422 12:10:08.128998   66264 main.go:141] libmachine: (flannel-230092) DBG | unable to find current IP address of domain flannel-230092 in network mk-flannel-230092
	I0422 12:10:08.129036   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:10:08.128932   66430 retry.go:31] will retry after 1.791534423s: waiting for machine to come up
	I0422 12:10:09.009396   64410 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 12:10:09.025182   64410 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 12:10:09.047100   64410 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 12:10:09.047227   64410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:10:09.047242   64410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-230092 minikube.k8s.io/updated_at=2024_04_22T12_10_09_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437 minikube.k8s.io/name=enable-default-cni-230092 minikube.k8s.io/primary=true
	I0422 12:10:09.076119   64410 ops.go:34] apiserver oom_adj: -16
	I0422 12:10:09.191572   64410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:10:08.275684   64175 node_ready.go:53] node "custom-flannel-230092" has status "Ready":"False"
	I0422 12:10:09.275337   64175 node_ready.go:49] node "custom-flannel-230092" has status "Ready":"True"
	I0422 12:10:09.275361   64175 node_ready.go:38] duration metric: took 10.004759s for node "custom-flannel-230092" to be "Ready" ...
	I0422 12:10:09.275370   64175 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 12:10:09.298300   64175 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-tchd2" in "kube-system" namespace to be "Ready" ...
	I0422 12:10:11.306989   64175 pod_ready.go:102] pod "coredns-7db6d8ff4d-tchd2" in "kube-system" namespace has status "Ready":"False"
	I0422 12:10:12.869677   64536 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.144543337s)
	I0422 12:10:12.869713   64536 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 12:10:12.869763   64536 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 12:10:12.876766   64536 start.go:562] Will wait 60s for crictl version
	I0422 12:10:12.876848   64536 ssh_runner.go:195] Run: which crictl
	I0422 12:10:12.881566   64536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 12:10:12.928395   64536 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 12:10:12.928476   64536 ssh_runner.go:195] Run: crio --version
	I0422 12:10:12.965927   64536 ssh_runner.go:195] Run: crio --version
	I0422 12:10:13.004651   64536 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
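The "Will wait 60s for socket path /var/run/crio/crio.sock" step after the crio restart above simply polls until the runtime socket exists before querying crictl. A small standard-library sketch of that wait; the poll interval and helper name are illustrative assumptions.

package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

// waitForSocket polls until path exists or timeout elapses, mirroring the
// "Will wait 60s for socket path /var/run/crio/crio.sock" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("socket %s did not appear within %v", path, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	fmt.Println("crio socket is ready")
}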
	I0422 12:10:09.922595   66264 main.go:141] libmachine: (flannel-230092) DBG | domain flannel-230092 has defined MAC address 52:54:00:df:a8:70 in network mk-flannel-230092
	I0422 12:10:09.923135   66264 main.go:141] libmachine: (flannel-230092) DBG | unable to find current IP address of domain flannel-230092 in network mk-flannel-230092
	I0422 12:10:09.923167   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:10:09.923077   66430 retry.go:31] will retry after 2.28429062s: waiting for machine to come up
	I0422 12:10:12.210070   66264 main.go:141] libmachine: (flannel-230092) DBG | domain flannel-230092 has defined MAC address 52:54:00:df:a8:70 in network mk-flannel-230092
	I0422 12:10:12.210809   66264 main.go:141] libmachine: (flannel-230092) DBG | unable to find current IP address of domain flannel-230092 in network mk-flannel-230092
	I0422 12:10:12.210836   66264 main.go:141] libmachine: (flannel-230092) DBG | I0422 12:10:12.210776   66430 retry.go:31] will retry after 3.58435938s: waiting for machine to come up
	I0422 12:10:09.691731   64410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:10:10.192189   64410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:10:10.691631   64410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:10:11.191740   64410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:10:11.692045   64410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:10:12.191738   64410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:10:12.692331   64410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:10:13.192438   64410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:10:13.692463   64410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:10:14.191822   64410 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 12:10:13.309211   64175 pod_ready.go:102] pod "coredns-7db6d8ff4d-tchd2" in "kube-system" namespace has status "Ready":"False"
	I0422 12:10:15.808741   64175 pod_ready.go:102] pod "coredns-7db6d8ff4d-tchd2" in "kube-system" namespace has status "Ready":"False"
	I0422 12:10:13.006171   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) Calling .GetIP
	I0422 12:10:13.009368   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:10:13.009844   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:d0:37", ip: ""} in network mk-kubernetes-upgrade-643419: {Iface:virbr2 ExpiryTime:2024-04-22 13:08:30 +0000 UTC Type:0 Mac:52:54:00:8b:d0:37 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-643419 Clientid:01:52:54:00:8b:d0:37}
	I0422 12:10:13.009881   64536 main.go:141] libmachine: (kubernetes-upgrade-643419) DBG | domain kubernetes-upgrade-643419 has defined IP address 192.168.50.54 and MAC address 52:54:00:8b:d0:37 in network mk-kubernetes-upgrade-643419
	I0422 12:10:13.010126   64536 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0422 12:10:13.015191   64536 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-643419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kube
rnetes-upgrade-643419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 12:10:13.015321   64536 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 12:10:13.015375   64536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 12:10:13.073274   64536 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 12:10:13.073300   64536 crio.go:433] Images already preloaded, skipping extraction
	I0422 12:10:13.073361   64536 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 12:10:13.121304   64536 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 12:10:13.121333   64536 cache_images.go:84] Images are preloaded, skipping loading
	I0422 12:10:13.121342   64536 kubeadm.go:928] updating node { 192.168.50.54 8443 v1.30.0 crio true true} ...
	I0422 12:10:13.121490   64536 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-643419 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-643419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 12:10:13.121577   64536 ssh_runner.go:195] Run: crio config
	I0422 12:10:13.207011   64536 cni.go:84] Creating CNI manager for ""
	I0422 12:10:13.207043   64536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 12:10:13.207062   64536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 12:10:13.207086   64536 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.54 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-643419 NodeName:kubernetes-upgrade-643419 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 12:10:13.207248   64536 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-643419"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 12:10:13.207316   64536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 12:10:13.223657   64536 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 12:10:13.223737   64536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 12:10:13.239698   64536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0422 12:10:13.261744   64536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 12:10:13.284579   64536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
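The kubeadm config shown above is produced from the option set logged as "kubeadm options: {...}" and then copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A simplified sketch of that render step using Go's text/template; the struct fields and template text are trimmed-down assumptions, not minikube's real templates, while the values are taken from the log above.

package main

import (
	"log"
	"os"
	"text/template"
)

// A trimmed-down stand-in for the options printed as "kubeadm options: {...}";
// field names here are illustrative, not minikube's real struct.
type kubeadmParams struct {
	AdvertiseAddress  string
	APIServerPort     int
	ClusterName       string
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	// Values mirror the cluster in the log above.
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.50.54",
		APIServerPort:     8443,
		ClusterName:       "kubernetes-upgrade-643419",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.30.0",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		log.Fatal(err)
	}
}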
	I0422 12:10:13.309350   64536 ssh_runner.go:195] Run: grep 192.168.50.54	control-plane.minikube.internal$ /etc/hosts
	I0422 12:10:13.314267   64536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 12:10:13.480503   64536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 12:10:13.498851   64536 certs.go:68] Setting up /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419 for IP: 192.168.50.54
	I0422 12:10:13.498874   64536 certs.go:194] generating shared ca certs ...
	I0422 12:10:13.498889   64536 certs.go:226] acquiring lock for ca certs: {Name:mk0b77082b88c771d0b00be5267ca31dfee6f85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:10:13.499079   64536 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key
	I0422 12:10:13.499141   64536 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key
	I0422 12:10:13.499154   64536 certs.go:256] generating profile certs ...
	I0422 12:10:13.499256   64536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/client.key
	I0422 12:10:13.499330   64536 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/apiserver.key.8993c292
	I0422 12:10:13.499381   64536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/proxy-client.key
	I0422 12:10:13.499506   64536 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem (1338 bytes)
	W0422 12:10:13.499550   64536 certs.go:480] ignoring /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945_empty.pem, impossibly tiny 0 bytes
	I0422 12:10:13.499562   64536 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem (1679 bytes)
	I0422 12:10:13.499597   64536 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem (1078 bytes)
	I0422 12:10:13.499632   64536 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem (1123 bytes)
	I0422 12:10:13.499657   64536 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem (1679 bytes)
	I0422 12:10:13.499717   64536 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem (1708 bytes)
	I0422 12:10:13.500542   64536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 12:10:13.534138   64536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 12:10:13.564664   64536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 12:10:13.596812   64536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 12:10:13.629848   64536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0422 12:10:13.661834   64536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 12:10:13.698079   64536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 12:10:13.734377   64536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kubernetes-upgrade-643419/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 12:10:13.771996   64536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem --> /usr/share/ca-certificates/14945.pem (1338 bytes)
	I0422 12:10:13.810958   64536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /usr/share/ca-certificates/149452.pem (1708 bytes)
	I0422 12:10:13.845795   64536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 12:10:13.877529   64536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 12:10:13.902837   64536 ssh_runner.go:195] Run: openssl version
	I0422 12:10:13.911157   64536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14945.pem && ln -fs /usr/share/ca-certificates/14945.pem /etc/ssl/certs/14945.pem"
	I0422 12:10:13.928937   64536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14945.pem
	I0422 12:10:13.935839   64536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 10:51 /usr/share/ca-certificates/14945.pem
	I0422 12:10:13.935930   64536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14945.pem
	I0422 12:10:13.942627   64536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14945.pem /etc/ssl/certs/51391683.0"
	I0422 12:10:13.955705   64536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149452.pem && ln -fs /usr/share/ca-certificates/149452.pem /etc/ssl/certs/149452.pem"
	I0422 12:10:13.970682   64536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149452.pem
	I0422 12:10:13.976408   64536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 10:51 /usr/share/ca-certificates/149452.pem
	I0422 12:10:13.976459   64536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149452.pem
	I0422 12:10:13.983230   64536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149452.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 12:10:13.994422   64536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 12:10:14.012022   64536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 12:10:14.019132   64536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 12:10:14.019194   64536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 12:10:14.025831   64536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 12:10:14.041840   64536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 12:10:14.048786   64536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 12:10:14.057672   64536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 12:10:14.065571   64536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 12:10:14.074241   64536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 12:10:14.081352   64536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 12:10:14.088942   64536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
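Each of the openssl x509 ... -checkend 86400 runs above asserts that a control-plane certificate remains valid for at least the next 24 hours. The same check can be expressed with Go's standard crypto/x509 package; the certificate path in main is one of the files the log checks, and the helper name is an illustrative assumption.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// certValidFor reports whether the PEM-encoded certificate at path is still
// valid d from now - the equivalent of `openssl x509 -checkend <seconds>`.
func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid for at least 24h:", ok)
}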
	I0422 12:10:14.097627   64536 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-643419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kuberne
tes-upgrade-643419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 12:10:14.097700   64536 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 12:10:14.097798   64536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 12:10:14.156737   64536 cri.go:89] found id: "606571bae1abaef80d934f7dfb0700c226fda12eac40af2c409a97767d3cf0f7"
	I0422 12:10:14.156763   64536 cri.go:89] found id: "8e86dacd2ecd16c65d9ca463f87ba8e3fec3cf1fe1a35a1ed3d3daa72c548fb6"
	I0422 12:10:14.156795   64536 cri.go:89] found id: "3781f4b80f2f57a0511c41942497af23a9f54af621751c0ddd0bab959d66e588"
	I0422 12:10:14.156801   64536 cri.go:89] found id: "d71f3f9bb401787e9d943320b7cc41a90f771067b8d796d52452aadf2edf81e4"
	I0422 12:10:14.156808   64536 cri.go:89] found id: "66582e7e0490e4b3bac9df39abb41db81bc6fd008efbdcfd23049bc6901dab6f"
	I0422 12:10:14.156812   64536 cri.go:89] found id: "f672efd9ce3d4325971f42182486a362e48b2b644c4dbacb7b366af023345af5"
	I0422 12:10:14.156816   64536 cri.go:89] found id: "2fc6eddb9dd26c77399c0a6a306b7d1be56f2379ec8aa8a06781de9a1ee55fc5"
	I0422 12:10:14.156821   64536 cri.go:89] found id: "52b05948da7e06c4b1bc3819ab8f768db5581b33b0d8d656e97cdf114b0748d0"
	I0422 12:10:14.156825   64536 cri.go:89] found id: "da79c8547182f32f39c65a77e4c4ab8e16b303d9e6d93fd49a2a3471f528cf12"
	I0422 12:10:14.156837   64536 cri.go:89] found id: "4f8fcd0ae8e40536f8e096c03a2f3cf3f6f7ec78b41193204a207bac816d0760"
	I0422 12:10:14.156845   64536 cri.go:89] found id: "32be860feb3d175842a542386c5751dd2f7ab1a23ff6a85c9994fda134292637"
	I0422 12:10:14.156849   64536 cri.go:89] found id: "53fa77b779f51afb171c73a4897be01d968982b5d25e4776482a17de4c6d0683"
	I0422 12:10:14.156856   64536 cri.go:89] found id: "693dc51132dca6b30aabb60de8cfd6cceca7b867fc3d32818d53ffbb142001f5"
	I0422 12:10:14.156860   64536 cri.go:89] found id: "9c026b47fcdf9dc2435321a5c2c1b89368109a5174ecab4e4e2845e2dcb2fd3b"
	I0422 12:10:14.156865   64536 cri.go:89] found id: "9e5b3b2fa3faa8ecf87a114abae536021ff2d1a657716066ff1b0f4510ad0e9e"
	I0422 12:10:14.156868   64536 cri.go:89] found id: ""
	I0422 12:10:14.156911   64536 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.641164436Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713787825641130735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9df11b5f-1130-4fb8-b538-7519dbc24d4a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.642164700Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8999917b-3ed1-4fa0-a6ec-6203613e079f name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.642409654Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8999917b-3ed1-4fa0-a6ec-6203613e079f name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.642985651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1b186e3bf179cc491a809e94d8bf7d760ade9f12d0a0764b705a1b1d265f914,PodSandboxId:eb187e3a6d01e55cf13d625124aefd81a3399424aa2c5b08186fe3678bc03cfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713787822451364789,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g2xxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ba95ac-a190-4bff-baa4-b80f7d3b92cf,},Annotations:map[string]string{io.kubernetes.container.hash: 56ed4bdc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d537b1139a553248a65cbe9c66c39e131dc7265fa3580a28cbebe03c6cfcbcf,PodSandboxId:e0946f992a35dcc984f4f151dc36f6d2eddc6546c4c0176dae3be74224cd3842,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713787822318729871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kmhk7,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 89cdefbd-b6d9-426c-955b-e88bb53d21df,},Annotations:map[string]string{io.kubernetes.container.hash: 93f9f0ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbeacaa374f4513fde93af8b82df22917f79ee35a7a8e588fc672ed11c44f470,PodSandboxId:f2ed17f216088e103ca1e3a2499b290300b5351330ea69a2c397c89cdfea9591,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAIN
ER_RUNNING,CreatedAt:1713787821727408148,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vc65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87fc7d8-46a8-464b-91f0-7815380bd783,},Annotations:map[string]string{io.kubernetes.container.hash: 3762a847,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f8e321b7bc08d667e1c232fc291278b95572d7308ed527ee58ab44d99ef2ad1,PodSandboxId:11267ecf1fcb43094b106dafa29b688dc82dda5ad1eae78bdf643afee3e5117c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171
3787821633577013,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fee0807-d5b5-4c11-97f8-ea09266f338e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d82f6e7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5972e15b0607c17f9650d8e635c75fd8a897da7075d63dc76660f31cd7f253e,PodSandboxId:60f25c26fe361f6562403cc7f857e8a9d2b8960f2a278735eb7279d40997985c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:17137878169
05006533,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c3de6290d40353d31a1ac9b781546a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1873ea660182ff6abaf59bc9c0a7404c98e9e951fe03bba2c41c1e05facde4c,PodSandboxId:bb3025cc2cf6636e82d333e5d5a7f14ce9c176346b557b2d114f03d5a883b08a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17137878
16802676890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5cec128541cb3fa4a85efaf12c82af8,},Annotations:map[string]string{io.kubernetes.container.hash: de56998c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3f3db9ba1b5f9f3a1cfa0ef75e56b9a540ed0c68b8320c429b31756833a1240,PodSandboxId:a2d76941ec81b8a8b08f5ede4c4c2a0bb17a4682af14856d2e3d28cdf3d9b520,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713787816857413824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cbe4c124e8503fc35b91b7792c26a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c99f25ab8b2799beb531a653b1749a4100a360c87dbe6531af8209bb25ccc00,PodSandboxId:84bd442545a8890d7503b2044b4d2a6f0691e1b1c1b999688fd5949d0862cd0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713787816726758616,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81180daaecda106ac085e185b6db7e93,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606571bae1abaef80d934f7dfb0700c226fda12eac40af2c409a97767d3cf0f7,PodSandboxId:b1466cf427b8b1942dfc0a5b5c3b7ae74199514d7d18086f9b144dd072e80aee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713787800995965443,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g2xxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ba95ac-a190-4bff-baa4-b80f7d3b92cf,},Annotations:map[string]string{io.kubernetes.container.hash: 56ed4bdc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e86dacd2ecd16c65d9ca463f87ba8e3fec3cf1fe1a35a1ed3d3daa72c548fb6,PodSandboxId:70e50e08351eb6428a4a67881d57420966bb18d28472167a7e7713faeb276202,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713787800703937825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kmhk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89cdefbd-b6d9-426c-955b-e88bb53d21df,},Annotations:map[string]string{io.kubernetes.container.hash: 93f9f0ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3781f4b80f2f57a0511c41942497af23a9f54af621751c0ddd0bab959d66e588,PodSandboxId:1d9a40fef3fa838e6497307869d6a4d3b938ba6666710a
adec8056b978cb081c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713787799959918982,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vc65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87fc7d8-46a8-464b-91f0-7815380bd783,},Annotations:map[string]string{io.kubernetes.container.hash: 3762a847,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71f3f9bb401787e9d943320b7cc41a90f771067b8d796d52452aadf2edf81e4,PodSandboxId:0de2198e4d5c9f9985d27e64137eeacb5efebc9ed4a34c9106d1fa16a4288b8b,Metadata:&Conta
inerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713787799936806699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c3de6290d40353d31a1ac9b781546a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66582e7e0490e4b3bac9df39abb41db81bc6fd008efbdcfd23049bc6901dab6f,PodSandboxId:e16bbafb8c849ccf20703be74feea8571773e72852c3dcce86a8305d2eb
18e1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713787799870814936,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81180daaecda106ac085e185b6db7e93,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f672efd9ce3d4325971f42182486a362e48b2b644c4dbacb7b366af023345af5,PodSandboxId:870c76ba20b7caefda7869d581ac90583944c69ed2690388b313d22ff73c46d3,
Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713787799827922335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5cec128541cb3fa4a85efaf12c82af8,},Annotations:map[string]string{io.kubernetes.container.hash: de56998c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc6eddb9dd26c77399c0a6a306b7d1be56f2379ec8aa8a06781de9a1ee55fc5,PodSandboxId:3961f4d99f12e7a16eb8f08076e569284bb13c55fe8f05df944a338c795cfe0d,Metadata:&ContainerMetadata{Name:kub
e-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713787799771670089,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cbe4c124e8503fc35b91b7792c26a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b05948da7e06c4b1bc3819ab8f768db5581b33b0d8d656e97cdf114b0748d0,PodSandboxId:20590504d21e2d991d64575fb664bf4b68e9f6a55b70ca5e289d9f9be14f9e5b,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713787781962670616,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fee0807-d5b5-4c11-97f8-ea09266f338e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d82f6e7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8999917b-3ed1-4fa0-a6ec-6203613e079f name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.704336802Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5511ef4b-980c-432a-9e95-bfa975e07e50 name=/runtime.v1.RuntimeService/Version
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.704436894Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5511ef4b-980c-432a-9e95-bfa975e07e50 name=/runtime.v1.RuntimeService/Version
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.707406928Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0ce9536-e837-4e15-98b7-79beab2a7dce name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.707874237Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713787825707845054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0ce9536-e837-4e15-98b7-79beab2a7dce name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.708763873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=145b70fb-836a-468d-8d22-7d48ec657073 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.708853097Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=145b70fb-836a-468d-8d22-7d48ec657073 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.709355477Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1b186e3bf179cc491a809e94d8bf7d760ade9f12d0a0764b705a1b1d265f914,PodSandboxId:eb187e3a6d01e55cf13d625124aefd81a3399424aa2c5b08186fe3678bc03cfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713787822451364789,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g2xxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ba95ac-a190-4bff-baa4-b80f7d3b92cf,},Annotations:map[string]string{io.kubernetes.container.hash: 56ed4bdc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d537b1139a553248a65cbe9c66c39e131dc7265fa3580a28cbebe03c6cfcbcf,PodSandboxId:e0946f992a35dcc984f4f151dc36f6d2eddc6546c4c0176dae3be74224cd3842,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713787822318729871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kmhk7,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 89cdefbd-b6d9-426c-955b-e88bb53d21df,},Annotations:map[string]string{io.kubernetes.container.hash: 93f9f0ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbeacaa374f4513fde93af8b82df22917f79ee35a7a8e588fc672ed11c44f470,PodSandboxId:f2ed17f216088e103ca1e3a2499b290300b5351330ea69a2c397c89cdfea9591,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAIN
ER_RUNNING,CreatedAt:1713787821727408148,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vc65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87fc7d8-46a8-464b-91f0-7815380bd783,},Annotations:map[string]string{io.kubernetes.container.hash: 3762a847,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f8e321b7bc08d667e1c232fc291278b95572d7308ed527ee58ab44d99ef2ad1,PodSandboxId:11267ecf1fcb43094b106dafa29b688dc82dda5ad1eae78bdf643afee3e5117c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171
3787821633577013,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fee0807-d5b5-4c11-97f8-ea09266f338e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d82f6e7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5972e15b0607c17f9650d8e635c75fd8a897da7075d63dc76660f31cd7f253e,PodSandboxId:60f25c26fe361f6562403cc7f857e8a9d2b8960f2a278735eb7279d40997985c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:17137878169
05006533,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c3de6290d40353d31a1ac9b781546a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1873ea660182ff6abaf59bc9c0a7404c98e9e951fe03bba2c41c1e05facde4c,PodSandboxId:bb3025cc2cf6636e82d333e5d5a7f14ce9c176346b557b2d114f03d5a883b08a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17137878
16802676890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5cec128541cb3fa4a85efaf12c82af8,},Annotations:map[string]string{io.kubernetes.container.hash: de56998c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3f3db9ba1b5f9f3a1cfa0ef75e56b9a540ed0c68b8320c429b31756833a1240,PodSandboxId:a2d76941ec81b8a8b08f5ede4c4c2a0bb17a4682af14856d2e3d28cdf3d9b520,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713787816857413824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cbe4c124e8503fc35b91b7792c26a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c99f25ab8b2799beb531a653b1749a4100a360c87dbe6531af8209bb25ccc00,PodSandboxId:84bd442545a8890d7503b2044b4d2a6f0691e1b1c1b999688fd5949d0862cd0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713787816726758616,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81180daaecda106ac085e185b6db7e93,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606571bae1abaef80d934f7dfb0700c226fda12eac40af2c409a97767d3cf0f7,PodSandboxId:b1466cf427b8b1942dfc0a5b5c3b7ae74199514d7d18086f9b144dd072e80aee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713787800995965443,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g2xxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ba95ac-a190-4bff-baa4-b80f7d3b92cf,},Annotations:map[string]string{io.kubernetes.container.hash: 56ed4bdc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e86dacd2ecd16c65d9ca463f87ba8e3fec3cf1fe1a35a1ed3d3daa72c548fb6,PodSandboxId:70e50e08351eb6428a4a67881d57420966bb18d28472167a7e7713faeb276202,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713787800703937825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kmhk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89cdefbd-b6d9-426c-955b-e88bb53d21df,},Annotations:map[string]string{io.kubernetes.container.hash: 93f9f0ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3781f4b80f2f57a0511c41942497af23a9f54af621751c0ddd0bab959d66e588,PodSandboxId:1d9a40fef3fa838e6497307869d6a4d3b938ba6666710a
adec8056b978cb081c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713787799959918982,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vc65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87fc7d8-46a8-464b-91f0-7815380bd783,},Annotations:map[string]string{io.kubernetes.container.hash: 3762a847,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71f3f9bb401787e9d943320b7cc41a90f771067b8d796d52452aadf2edf81e4,PodSandboxId:0de2198e4d5c9f9985d27e64137eeacb5efebc9ed4a34c9106d1fa16a4288b8b,Metadata:&Conta
inerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713787799936806699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c3de6290d40353d31a1ac9b781546a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66582e7e0490e4b3bac9df39abb41db81bc6fd008efbdcfd23049bc6901dab6f,PodSandboxId:e16bbafb8c849ccf20703be74feea8571773e72852c3dcce86a8305d2eb
18e1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713787799870814936,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81180daaecda106ac085e185b6db7e93,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f672efd9ce3d4325971f42182486a362e48b2b644c4dbacb7b366af023345af5,PodSandboxId:870c76ba20b7caefda7869d581ac90583944c69ed2690388b313d22ff73c46d3,
Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713787799827922335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5cec128541cb3fa4a85efaf12c82af8,},Annotations:map[string]string{io.kubernetes.container.hash: de56998c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc6eddb9dd26c77399c0a6a306b7d1be56f2379ec8aa8a06781de9a1ee55fc5,PodSandboxId:3961f4d99f12e7a16eb8f08076e569284bb13c55fe8f05df944a338c795cfe0d,Metadata:&ContainerMetadata{Name:kub
e-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713787799771670089,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cbe4c124e8503fc35b91b7792c26a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b05948da7e06c4b1bc3819ab8f768db5581b33b0d8d656e97cdf114b0748d0,PodSandboxId:20590504d21e2d991d64575fb664bf4b68e9f6a55b70ca5e289d9f9be14f9e5b,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713787781962670616,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fee0807-d5b5-4c11-97f8-ea09266f338e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d82f6e7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=145b70fb-836a-468d-8d22-7d48ec657073 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.827681468Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bdd65c63-685d-4537-bc0e-da124765f212 name=/runtime.v1.RuntimeService/Version
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.827787323Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bdd65c63-685d-4537-bc0e-da124765f212 name=/runtime.v1.RuntimeService/Version
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.830508911Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49300518-0035-4c49-b3e6-1b2aee11a1e4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.831619289Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713787825831537805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49300518-0035-4c49-b3e6-1b2aee11a1e4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.832713629Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f7c9d96-907e-4f27-b1da-5e766ae8f2e3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.832871107Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f7c9d96-907e-4f27-b1da-5e766ae8f2e3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:10:25 kubernetes-upgrade-643419 crio[3041]: time="2024-04-22 12:10:25.833676329Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1b186e3bf179cc491a809e94d8bf7d760ade9f12d0a0764b705a1b1d265f914,PodSandboxId:eb187e3a6d01e55cf13d625124aefd81a3399424aa2c5b08186fe3678bc03cfe,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713787822451364789,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g2xxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ba95ac-a190-4bff-baa4-b80f7d3b92cf,},Annotations:map[string]string{io.kubernetes.container.hash: 56ed4bdc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d537b1139a553248a65cbe9c66c39e131dc7265fa3580a28cbebe03c6cfcbcf,PodSandboxId:e0946f992a35dcc984f4f151dc36f6d2eddc6546c4c0176dae3be74224cd3842,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713787822318729871,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kmhk7,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 89cdefbd-b6d9-426c-955b-e88bb53d21df,},Annotations:map[string]string{io.kubernetes.container.hash: 93f9f0ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbeacaa374f4513fde93af8b82df22917f79ee35a7a8e588fc672ed11c44f470,PodSandboxId:f2ed17f216088e103ca1e3a2499b290300b5351330ea69a2c397c89cdfea9591,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAIN
ER_RUNNING,CreatedAt:1713787821727408148,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vc65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87fc7d8-46a8-464b-91f0-7815380bd783,},Annotations:map[string]string{io.kubernetes.container.hash: 3762a847,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f8e321b7bc08d667e1c232fc291278b95572d7308ed527ee58ab44d99ef2ad1,PodSandboxId:11267ecf1fcb43094b106dafa29b688dc82dda5ad1eae78bdf643afee3e5117c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171
3787821633577013,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fee0807-d5b5-4c11-97f8-ea09266f338e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d82f6e7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5972e15b0607c17f9650d8e635c75fd8a897da7075d63dc76660f31cd7f253e,PodSandboxId:60f25c26fe361f6562403cc7f857e8a9d2b8960f2a278735eb7279d40997985c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:17137878169
05006533,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c3de6290d40353d31a1ac9b781546a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1873ea660182ff6abaf59bc9c0a7404c98e9e951fe03bba2c41c1e05facde4c,PodSandboxId:bb3025cc2cf6636e82d333e5d5a7f14ce9c176346b557b2d114f03d5a883b08a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17137878
16802676890,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5cec128541cb3fa4a85efaf12c82af8,},Annotations:map[string]string{io.kubernetes.container.hash: de56998c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3f3db9ba1b5f9f3a1cfa0ef75e56b9a540ed0c68b8320c429b31756833a1240,PodSandboxId:a2d76941ec81b8a8b08f5ede4c4c2a0bb17a4682af14856d2e3d28cdf3d9b520,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713787816857413824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cbe4c124e8503fc35b91b7792c26a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c99f25ab8b2799beb531a653b1749a4100a360c87dbe6531af8209bb25ccc00,PodSandboxId:84bd442545a8890d7503b2044b4d2a6f0691e1b1c1b999688fd5949d0862cd0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713787816726758616,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81180daaecda106ac085e185b6db7e93,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:606571bae1abaef80d934f7dfb0700c226fda12eac40af2c409a97767d3cf0f7,PodSandboxId:b1466cf427b8b1942dfc0a5b5c3b7ae74199514d7d18086f9b144dd072e80aee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713787800995965443,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g2xxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ba95ac-a190-4bff-baa4-b80f7d3b92cf,},Annotations:map[string]string{io.kubernetes.container.hash: 56ed4bdc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e86dacd2ecd16c65d9ca463f87ba8e3fec3cf1fe1a35a1ed3d3daa72c548fb6,PodSandboxId:70e50e08351eb6428a4a67881d57420966bb18d28472167a7e7713faeb276202,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713787800703937825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kmhk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89cdefbd-b6d9-426c-955b-e88bb53d21df,},Annotations:map[string]string{io.kubernetes.container.hash: 93f9f0ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3781f4b80f2f57a0511c41942497af23a9f54af621751c0ddd0bab959d66e588,PodSandboxId:1d9a40fef3fa838e6497307869d6a4d3b938ba6666710a
adec8056b978cb081c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713787799959918982,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vc65,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87fc7d8-46a8-464b-91f0-7815380bd783,},Annotations:map[string]string{io.kubernetes.container.hash: 3762a847,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71f3f9bb401787e9d943320b7cc41a90f771067b8d796d52452aadf2edf81e4,PodSandboxId:0de2198e4d5c9f9985d27e64137eeacb5efebc9ed4a34c9106d1fa16a4288b8b,Metadata:&Conta
inerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713787799936806699,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81c3de6290d40353d31a1ac9b781546a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66582e7e0490e4b3bac9df39abb41db81bc6fd008efbdcfd23049bc6901dab6f,PodSandboxId:e16bbafb8c849ccf20703be74feea8571773e72852c3dcce86a8305d2eb
18e1d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713787799870814936,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81180daaecda106ac085e185b6db7e93,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f672efd9ce3d4325971f42182486a362e48b2b644c4dbacb7b366af023345af5,PodSandboxId:870c76ba20b7caefda7869d581ac90583944c69ed2690388b313d22ff73c46d3,
Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713787799827922335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5cec128541cb3fa4a85efaf12c82af8,},Annotations:map[string]string{io.kubernetes.container.hash: de56998c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc6eddb9dd26c77399c0a6a306b7d1be56f2379ec8aa8a06781de9a1ee55fc5,PodSandboxId:3961f4d99f12e7a16eb8f08076e569284bb13c55fe8f05df944a338c795cfe0d,Metadata:&ContainerMetadata{Name:kub
e-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713787799771670089,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-643419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cbe4c124e8503fc35b91b7792c26a06,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52b05948da7e06c4b1bc3819ab8f768db5581b33b0d8d656e97cdf114b0748d0,PodSandboxId:20590504d21e2d991d64575fb664bf4b68e9f6a55b70ca5e289d9f9be14f9e5b,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713787781962670616,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fee0807-d5b5-4c11-97f8-ea09266f338e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d82f6e7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f7c9d96-907e-4f27-b1da-5e766ae8f2e3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f1b186e3bf179       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   eb187e3a6d01e       coredns-7db6d8ff4d-g2xxg
	6d537b1139a55       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   e0946f992a35d       coredns-7db6d8ff4d-kmhk7
	bbeacaa374f45       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   4 seconds ago       Running             kube-proxy                2                   f2ed17f216088       kube-proxy-4vc65
	7f8e321b7bc08       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       2                   11267ecf1fcb4       storage-provisioner
	c5972e15b0607       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   9 seconds ago       Running             kube-controller-manager   2                   60f25c26fe361       kube-controller-manager-kubernetes-upgrade-643419
	f3f3db9ba1b5f       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   9 seconds ago       Running             kube-apiserver            2                   a2d76941ec81b       kube-apiserver-kubernetes-upgrade-643419
	f1873ea660182       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 seconds ago       Running             etcd                      2                   bb3025cc2cf66       etcd-kubernetes-upgrade-643419
	3c99f25ab8b27       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   9 seconds ago       Running             kube-scheduler            2                   84bd442545a88       kube-scheduler-kubernetes-upgrade-643419
	606571bae1aba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago      Exited              coredns                   1                   b1466cf427b8b       coredns-7db6d8ff4d-g2xxg
	8e86dacd2ecd1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   25 seconds ago      Exited              coredns                   1                   70e50e08351eb       coredns-7db6d8ff4d-kmhk7
	3781f4b80f2f5       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   25 seconds ago      Exited              kube-proxy                1                   1d9a40fef3fa8       kube-proxy-4vc65
	d71f3f9bb4017       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   26 seconds ago      Exited              kube-controller-manager   1                   0de2198e4d5c9       kube-controller-manager-kubernetes-upgrade-643419
	66582e7e0490e       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   26 seconds ago      Exited              kube-scheduler            1                   e16bbafb8c849       kube-scheduler-kubernetes-upgrade-643419
	f672efd9ce3d4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   26 seconds ago      Exited              etcd                      1                   870c76ba20b7c       etcd-kubernetes-upgrade-643419
	2fc6eddb9dd26       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   26 seconds ago      Exited              kube-apiserver            1                   3961f4d99f12e       kube-apiserver-kubernetes-upgrade-643419
	52b05948da7e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   43 seconds ago      Exited              storage-provisioner       1                   20590504d21e2       storage-provisioner
	
	
	==> coredns [606571bae1abaef80d934f7dfb0700c226fda12eac40af2c409a97767d3cf0f7] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6d537b1139a553248a65cbe9c66c39e131dc7265fa3580a28cbebe03c6cfcbcf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [8e86dacd2ecd16c65d9ca463f87ba8e3fec3cf1fe1a35a1ed3d3daa72c548fb6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f1b186e3bf179cc491a809e94d8bf7d760ade9f12d0a0764b705a1b1d265f914] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-643419
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-643419
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 12:08:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-643419
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 12:10:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 12:10:20 +0000   Mon, 22 Apr 2024 12:08:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 12:10:20 +0000   Mon, 22 Apr 2024 12:08:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 12:10:20 +0000   Mon, 22 Apr 2024 12:08:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 12:10:20 +0000   Mon, 22 Apr 2024 12:08:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.54
	  Hostname:    kubernetes-upgrade-643419
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a28952b68a5456eb699e56a1ef00fac
	  System UUID:                0a28952b-68a5-456e-b699-e56a1ef00fac
	  Boot ID:                    08f17d84-a8ef-4431-9a8c-730e7fc5c5fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-g2xxg                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     75s
	  kube-system                 coredns-7db6d8ff4d-kmhk7                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     75s
	  kube-system                 etcd-kubernetes-upgrade-643419                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         83s
	  kube-system                 kube-apiserver-kubernetes-upgrade-643419             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-643419    200m (10%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-4vc65                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-kubernetes-upgrade-643419             100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 74s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  97s (x8 over 97s)  kubelet          Node kubernetes-upgrade-643419 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s (x8 over 97s)  kubelet          Node kubernetes-upgrade-643419 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s (x7 over 97s)  kubelet          Node kubernetes-upgrade-643419 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  97s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           76s                node-controller  Node kubernetes-upgrade-643419 event: Registered Node kubernetes-upgrade-643419 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x8 over 10s)  kubelet          Node kubernetes-upgrade-643419 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x8 over 10s)  kubelet          Node kubernetes-upgrade-643419 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x7 over 10s)  kubelet          Node kubernetes-upgrade-643419 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10s                kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.487091] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.082127] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.093630] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.233959] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.233569] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.456604] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +6.865260] systemd-fstab-generator[735]: Ignoring "noauto" option for root device
	[  +0.097813] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.646320] systemd-fstab-generator[864]: Ignoring "noauto" option for root device
	[  +0.323717] hrtimer: interrupt took 3593443 ns
	[Apr22 12:09] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.095941] systemd-fstab-generator[1258]: Ignoring "noauto" option for root device
	[  +9.786189] kauditd_printk_skb: 15 callbacks suppressed
	[ +30.777099] kauditd_printk_skb: 76 callbacks suppressed
	[Apr22 12:10] systemd-fstab-generator[2845]: Ignoring "noauto" option for root device
	[  +0.427858] systemd-fstab-generator[2928]: Ignoring "noauto" option for root device
	[  +0.415269] systemd-fstab-generator[2972]: Ignoring "noauto" option for root device
	[  +0.270169] systemd-fstab-generator[2991]: Ignoring "noauto" option for root device
	[  +0.508624] systemd-fstab-generator[3028]: Ignoring "noauto" option for root device
	[ +10.823681] systemd-fstab-generator[3356]: Ignoring "noauto" option for root device
	[  +0.095583] kauditd_printk_skb: 202 callbacks suppressed
	[  +2.305200] systemd-fstab-generator[3481]: Ignoring "noauto" option for root device
	[  +5.625105] kauditd_printk_skb: 88 callbacks suppressed
	[  +1.902425] systemd-fstab-generator[4497]: Ignoring "noauto" option for root device
	
	
	==> etcd [f1873ea660182ff6abaf59bc9c0a7404c98e9e951fe03bba2c41c1e05facde4c] <==
	{"level":"info","ts":"2024-04-22T12:10:17.323458Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b0a6bbe4c9ddfbc1","initial-advertise-peer-urls":["https://192.168.50.54:2380"],"listen-peer-urls":["https://192.168.50.54:2380"],"advertise-client-urls":["https://192.168.50.54:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.54:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T12:10:17.323516Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-22T12:10:17.323687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 switched to configuration voters=(12729067988122991553)"}
	{"level":"info","ts":"2024-04-22T12:10:17.323762Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","added-peer-id":"b0a6bbe4c9ddfbc1","added-peer-peer-urls":["https://192.168.50.54:2380"]}
	{"level":"info","ts":"2024-04-22T12:10:17.324527Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T12:10:17.324992Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T12:10:17.329083Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.54:2380"}
	{"level":"info","ts":"2024-04-22T12:10:17.329135Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.54:2380"}
	{"level":"info","ts":"2024-04-22T12:10:17.335991Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T12:10:17.336058Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T12:10:17.336087Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T12:10:19.17665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 is starting a new election at term 3"}
	{"level":"info","ts":"2024-04-22T12:10:19.176781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-04-22T12:10:19.176831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgPreVoteResp from b0a6bbe4c9ddfbc1 at term 3"}
	{"level":"info","ts":"2024-04-22T12:10:19.176868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became candidate at term 4"}
	{"level":"info","ts":"2024-04-22T12:10:19.176892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgVoteResp from b0a6bbe4c9ddfbc1 at term 4"}
	{"level":"info","ts":"2024-04-22T12:10:19.176919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became leader at term 4"}
	{"level":"info","ts":"2024-04-22T12:10:19.176944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b0a6bbe4c9ddfbc1 elected leader b0a6bbe4c9ddfbc1 at term 4"}
	{"level":"info","ts":"2024-04-22T12:10:19.183852Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T12:10:19.184288Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T12:10:19.183854Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b0a6bbe4c9ddfbc1","local-member-attributes":"{Name:kubernetes-upgrade-643419 ClientURLs:[https://192.168.50.54:2379]}","request-path":"/0/members/b0a6bbe4c9ddfbc1/attributes","cluster-id":"b7dc4198fc8444d0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T12:10:19.184624Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T12:10:19.184701Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T12:10:19.186114Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T12:10:19.18646Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.54:2379"}
	
	
	==> etcd [f672efd9ce3d4325971f42182486a362e48b2b644c4dbacb7b366af023345af5] <==
	{"level":"info","ts":"2024-04-22T12:10:02.263493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-22T12:10:02.263659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-22T12:10:02.263834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgPreVoteResp from b0a6bbe4c9ddfbc1 at term 2"}
	{"level":"info","ts":"2024-04-22T12:10:02.263875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became candidate at term 3"}
	{"level":"info","ts":"2024-04-22T12:10:02.2639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgVoteResp from b0a6bbe4c9ddfbc1 at term 3"}
	{"level":"info","ts":"2024-04-22T12:10:02.263926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became leader at term 3"}
	{"level":"info","ts":"2024-04-22T12:10:02.263952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b0a6bbe4c9ddfbc1 elected leader b0a6bbe4c9ddfbc1 at term 3"}
	{"level":"info","ts":"2024-04-22T12:10:02.2661Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b0a6bbe4c9ddfbc1","local-member-attributes":"{Name:kubernetes-upgrade-643419 ClientURLs:[https://192.168.50.54:2379]}","request-path":"/0/members/b0a6bbe4c9ddfbc1/attributes","cluster-id":"b7dc4198fc8444d0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T12:10:02.268594Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T12:10:02.269202Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T12:10:02.271501Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T12:10:02.268543Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T12:10:02.279564Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.54:2379"}
	{"level":"info","ts":"2024-04-22T12:10:02.33242Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T12:10:02.754529Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-22T12:10:02.754621Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"kubernetes-upgrade-643419","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.54:2380"],"advertise-client-urls":["https://192.168.50.54:2379"]}
	{"level":"warn","ts":"2024-04-22T12:10:02.754725Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T12:10:02.754799Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T12:10:02.780181Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:51376","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:51376: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T12:10:02.795313Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.54:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T12:10:02.795425Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.54:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-22T12:10:02.795502Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b0a6bbe4c9ddfbc1","current-leader-member-id":"b0a6bbe4c9ddfbc1"}
	{"level":"info","ts":"2024-04-22T12:10:02.803469Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.54:2380"}
	{"level":"info","ts":"2024-04-22T12:10:02.803658Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.54:2380"}
	{"level":"info","ts":"2024-04-22T12:10:02.803721Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"kubernetes-upgrade-643419","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.54:2380"],"advertise-client-urls":["https://192.168.50.54:2379"]}
	
	
	==> kernel <==
	 12:10:26 up 2 min,  0 users,  load average: 2.48, 0.82, 0.29
	Linux kubernetes-upgrade-643419 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2fc6eddb9dd26c77399c0a6a306b7d1be56f2379ec8aa8a06781de9a1ee55fc5] <==
	I0422 12:10:00.492944       1 options.go:221] external host was not specified, using 192.168.50.54
	I0422 12:10:00.496400       1 server.go:148] Version: v1.30.0
	I0422 12:10:00.496440       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 12:10:01.963042       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0422 12:10:01.995155       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 12:10:02.002799       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0422 12:10:02.002880       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0422 12:10:02.003084       1 instance.go:299] Using reconciler: lease
	I0422 12:10:02.680188       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	W0422 12:10:02.680447       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	W0422 12:10:02.762615       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: EOF"
	E0422 12:10:02.770892       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-apiserver [f3f3db9ba1b5f9f3a1cfa0ef75e56b9a540ed0c68b8320c429b31756833a1240] <==
	I0422 12:10:20.595415       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0422 12:10:20.595494       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0422 12:10:20.595526       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0422 12:10:20.598030       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0422 12:10:20.598151       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0422 12:10:20.602821       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0422 12:10:20.607106       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0422 12:10:20.607228       1 shared_informer.go:320] Caches are synced for configmaps
	I0422 12:10:20.608487       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0422 12:10:20.609682       1 aggregator.go:165] initial CRD sync complete...
	I0422 12:10:20.609745       1 autoregister_controller.go:141] Starting autoregister controller
	I0422 12:10:20.609769       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0422 12:10:20.609792       1 cache.go:39] Caches are synced for autoregister controller
	I0422 12:10:20.674759       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0422 12:10:20.679150       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 12:10:20.679194       1 policy_source.go:224] refreshing policies
	I0422 12:10:20.703228       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0422 12:10:20.726855       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0422 12:10:21.523554       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0422 12:10:22.994023       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0422 12:10:23.018026       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0422 12:10:23.063707       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0422 12:10:23.152322       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0422 12:10:23.164602       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0422 12:10:24.441340       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c5972e15b0607c17f9650d8e635c75fd8a897da7075d63dc76660f31cd7f253e] <==
	I0422 12:10:22.975460       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0422 12:10:22.975552       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0422 12:10:22.975611       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0422 12:10:22.975767       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0422 12:10:22.976076       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0422 12:10:22.976383       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0422 12:10:22.976728       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0422 12:10:22.976783       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0422 12:10:22.976831       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0422 12:10:22.976876       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0422 12:10:22.976933       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0422 12:10:22.976970       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0422 12:10:22.977039       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0422 12:10:22.977117       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0422 12:10:22.977286       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0422 12:10:22.977585       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0422 12:10:22.977634       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0422 12:10:22.982204       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0422 12:10:22.982537       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0422 12:10:22.982572       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0422 12:10:22.987947       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0422 12:10:22.988350       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0422 12:10:22.988397       1 shared_informer.go:313] Waiting for caches to sync for job
	I0422 12:10:22.992739       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0422 12:10:22.993040       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	
	
	==> kube-controller-manager [d71f3f9bb401787e9d943320b7cc41a90f771067b8d796d52452aadf2edf81e4] <==
	
	
	==> kube-proxy [3781f4b80f2f57a0511c41942497af23a9f54af621751c0ddd0bab959d66e588] <==
	
	
	==> kube-proxy [bbeacaa374f4513fde93af8b82df22917f79ee35a7a8e588fc672ed11c44f470] <==
	I0422 12:10:22.128675       1 server_linux.go:69] "Using iptables proxy"
	I0422 12:10:22.165808       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.54"]
	I0422 12:10:22.360058       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 12:10:22.360158       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 12:10:22.360198       1 server_linux.go:165] "Using iptables Proxier"
	I0422 12:10:22.378492       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 12:10:22.378769       1 server.go:872] "Version info" version="v1.30.0"
	I0422 12:10:22.378813       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 12:10:22.391197       1 config.go:192] "Starting service config controller"
	I0422 12:10:22.391406       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 12:10:22.391533       1 config.go:101] "Starting endpoint slice config controller"
	I0422 12:10:22.391539       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 12:10:22.394075       1 config.go:319] "Starting node config controller"
	I0422 12:10:22.394323       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 12:10:22.493357       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 12:10:22.493501       1 shared_informer.go:320] Caches are synced for service config
	I0422 12:10:22.497211       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3c99f25ab8b2799beb531a653b1749a4100a360c87dbe6531af8209bb25ccc00] <==
	I0422 12:10:18.163962       1 serving.go:380] Generated self-signed cert in-memory
	W0422 12:10:20.565020       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0422 12:10:20.565193       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 12:10:20.565211       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0422 12:10:20.565218       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0422 12:10:20.620183       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0422 12:10:20.621158       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 12:10:20.622888       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0422 12:10:20.623010       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0422 12:10:20.627218       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 12:10:20.623034       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 12:10:20.729055       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [66582e7e0490e4b3bac9df39abb41db81bc6fd008efbdcfd23049bc6901dab6f] <==
	
	
	==> kubelet <==
	Apr 22 12:10:16 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:16.755749    3488 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-643419"
	Apr 22 12:10:16 kubernetes-upgrade-643419 kubelet[3488]: E0422 12:10:16.757340    3488 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.54:8443: connect: connection refused" node="kubernetes-upgrade-643419"
	Apr 22 12:10:17 kubernetes-upgrade-643419 kubelet[3488]: W0422 12:10:17.151679    3488 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-643419&limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
	Apr 22 12:10:17 kubernetes-upgrade-643419 kubelet[3488]: E0422 12:10:17.151762    3488 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-643419&limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
	Apr 22 12:10:17 kubernetes-upgrade-643419 kubelet[3488]: W0422 12:10:17.204036    3488 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
	Apr 22 12:10:17 kubernetes-upgrade-643419 kubelet[3488]: E0422 12:10:17.204672    3488 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
	Apr 22 12:10:17 kubernetes-upgrade-643419 kubelet[3488]: W0422 12:10:17.346502    3488 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
	Apr 22 12:10:17 kubernetes-upgrade-643419 kubelet[3488]: E0422 12:10:17.346569    3488 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
	Apr 22 12:10:17 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:17.560826    3488 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-643419"
	Apr 22 12:10:20 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:20.719309    3488 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-643419"
	Apr 22 12:10:20 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:20.719395    3488 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-643419"
	Apr 22 12:10:20 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:20.721175    3488 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 22 12:10:20 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:20.722361    3488 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 22 12:10:21 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:21.015878    3488 apiserver.go:52] "Watching apiserver"
	Apr 22 12:10:21 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:21.019536    3488 topology_manager.go:215] "Topology Admit Handler" podUID="5fee0807-d5b5-4c11-97f8-ea09266f338e" podNamespace="kube-system" podName="storage-provisioner"
	Apr 22 12:10:21 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:21.019686    3488 topology_manager.go:215] "Topology Admit Handler" podUID="b87fc7d8-46a8-464b-91f0-7815380bd783" podNamespace="kube-system" podName="kube-proxy-4vc65"
	Apr 22 12:10:21 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:21.019789    3488 topology_manager.go:215] "Topology Admit Handler" podUID="74ba95ac-a190-4bff-baa4-b80f7d3b92cf" podNamespace="kube-system" podName="coredns-7db6d8ff4d-g2xxg"
	Apr 22 12:10:21 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:21.019854    3488 topology_manager.go:215] "Topology Admit Handler" podUID="89cdefbd-b6d9-426c-955b-e88bb53d21df" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kmhk7"
	Apr 22 12:10:21 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:21.024630    3488 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 22 12:10:21 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:21.025711    3488 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b87fc7d8-46a8-464b-91f0-7815380bd783-lib-modules\") pod \"kube-proxy-4vc65\" (UID: \"b87fc7d8-46a8-464b-91f0-7815380bd783\") " pod="kube-system/kube-proxy-4vc65"
	Apr 22 12:10:21 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:21.025748    3488 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5fee0807-d5b5-4c11-97f8-ea09266f338e-tmp\") pod \"storage-provisioner\" (UID: \"5fee0807-d5b5-4c11-97f8-ea09266f338e\") " pod="kube-system/storage-provisioner"
	Apr 22 12:10:21 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:21.025775    3488 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b87fc7d8-46a8-464b-91f0-7815380bd783-xtables-lock\") pod \"kube-proxy-4vc65\" (UID: \"b87fc7d8-46a8-464b-91f0-7815380bd783\") " pod="kube-system/kube-proxy-4vc65"
	Apr 22 12:10:24 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:24.397323    3488 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 22 12:10:25 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:25.986052    3488 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 22 12:10:27 kubernetes-upgrade-643419 kubelet[3488]: I0422 12:10:27.117592    3488 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [52b05948da7e06c4b1bc3819ab8f768db5581b33b0d8d656e97cdf114b0748d0] <==
	I0422 12:09:42.099867       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 12:09:42.114946       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 12:09:42.115092       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0422 12:09:42.130775       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0422 12:09:42.131222       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-643419_438dc1a4-b7a9-4ada-9d5b-82c78407af93!
	I0422 12:09:42.131579       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"22910aa7-3c46-4326-8b8d-bdc40132032b", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-643419_438dc1a4-b7a9-4ada-9d5b-82c78407af93 became leader
	I0422 12:09:42.231943       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-643419_438dc1a4-b7a9-4ada-9d5b-82c78407af93!
	
	
	==> storage-provisioner [7f8e321b7bc08d667e1c232fc291278b95572d7308ed527ee58ab44d99ef2ad1] <==
	I0422 12:10:21.916194       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 12:10:21.976603       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 12:10:21.983505       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 12:10:25.132017   66818 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18711-7633/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-643419 -n kubernetes-upgrade-643419
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-643419 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-643419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-643419
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-643419: (1.358548602s)
--- FAIL: TestKubernetesUpgrade (408.64s)
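Note on the stderr message above: "failed to read file .../lastStart.txt: bufio.Scanner: token too long" is the standard Go error produced when bufio.Scanner encounters a line longer than its buffer, which defaults to bufio.MaxScanTokenSize (64 KiB). The snippet below is not minikube's logs.go code; it is only a minimal standalone sketch of how that error arises and how enlarging the scanner buffer avoids it. The file name is taken from the error message and stands in for any log file containing very long lines.

// lastline.go: minimal sketch, assuming a plain text file whose longest
// line may exceed bufio's default 64 KiB token limit.
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // file name borrowed from the error above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// With the default buffer, any line over bufio.MaxScanTokenSize (64 KiB)
	// makes scanner.Err() return "bufio.Scanner: token too long".
	// Growing the buffer to 1 MiB raises that per-line limit.
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024)

	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}

Under that assumption, a single oversized line in lastStart.txt (for example one carrying an entire flattened cluster config, like the ones visible later in this report) would be enough to trip the default limit and abort the "last start logs" output.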

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (77.62s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-253908 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-253908 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m12.928958827s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-253908] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-253908" primary control-plane node in "pause-253908" cluster
	* Updating the running kvm2 "pause-253908" VM ...
	* Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-253908" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 12:02:22.034754   56551 out.go:291] Setting OutFile to fd 1 ...
	I0422 12:02:22.035378   56551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 12:02:22.035402   56551 out.go:304] Setting ErrFile to fd 2...
	I0422 12:02:22.035410   56551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 12:02:22.035873   56551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 12:02:22.036741   56551 out.go:298] Setting JSON to false
	I0422 12:02:22.037685   56551 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6285,"bootTime":1713781057,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 12:02:22.037749   56551 start.go:139] virtualization: kvm guest
	I0422 12:02:22.039887   56551 out.go:177] * [pause-253908] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 12:02:22.041937   56551 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 12:02:22.041942   56551 notify.go:220] Checking for updates...
	I0422 12:02:22.043613   56551 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 12:02:22.045500   56551 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 12:02:22.047097   56551 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 12:02:22.048481   56551 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 12:02:22.050151   56551 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 12:02:22.052047   56551 config.go:182] Loaded profile config "pause-253908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 12:02:22.052582   56551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 12:02:22.052626   56551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 12:02:22.068494   56551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40629
	I0422 12:02:22.068931   56551 main.go:141] libmachine: () Calling .GetVersion
	I0422 12:02:22.069456   56551 main.go:141] libmachine: Using API Version  1
	I0422 12:02:22.069474   56551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 12:02:22.069797   56551 main.go:141] libmachine: () Calling .GetMachineName
	I0422 12:02:22.069983   56551 main.go:141] libmachine: (pause-253908) Calling .DriverName
	I0422 12:02:22.070323   56551 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 12:02:22.070674   56551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 12:02:22.070712   56551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 12:02:22.085867   56551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39777
	I0422 12:02:22.086251   56551 main.go:141] libmachine: () Calling .GetVersion
	I0422 12:02:22.086736   56551 main.go:141] libmachine: Using API Version  1
	I0422 12:02:22.086758   56551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 12:02:22.087096   56551 main.go:141] libmachine: () Calling .GetMachineName
	I0422 12:02:22.087306   56551 main.go:141] libmachine: (pause-253908) Calling .DriverName
	I0422 12:02:22.120222   56551 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 12:02:22.121611   56551 start.go:297] selected driver: kvm2
	I0422 12:02:22.121626   56551 start.go:901] validating driver "kvm2" against &{Name:pause-253908 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pa
use-253908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-
security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 12:02:22.121767   56551 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 12:02:22.122084   56551 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 12:02:22.122162   56551 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18711-7633/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 12:02:22.136728   56551 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 12:02:22.137427   56551 cni.go:84] Creating CNI manager for ""
	I0422 12:02:22.137443   56551 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 12:02:22.137497   56551 start.go:340] cluster config:
	{Name:pause-253908 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-253908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false re
gistry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 12:02:22.137617   56551 iso.go:125] acquiring lock: {Name:mkb6ac9fd17ffabc92a94047094130aad6203a95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 12:02:22.139686   56551 out.go:177] * Starting "pause-253908" primary control-plane node in "pause-253908" cluster
	I0422 12:02:22.141210   56551 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 12:02:22.141251   56551 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 12:02:22.141273   56551 cache.go:56] Caching tarball of preloaded images
	I0422 12:02:22.141361   56551 preload.go:173] Found /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 12:02:22.141373   56551 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 12:02:22.141484   56551 profile.go:143] Saving config to /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/pause-253908/config.json ...
	I0422 12:02:22.141725   56551 start.go:360] acquireMachinesLock for pause-253908: {Name:mk5cb9b294e703b264c1f97ac968ffd01e93b576 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 12:03:02.155578   56551 start.go:364] duration metric: took 40.013827426s to acquireMachinesLock for "pause-253908"
	I0422 12:03:02.155622   56551 start.go:96] Skipping create...Using existing machine configuration
	I0422 12:03:02.155628   56551 fix.go:54] fixHost starting: 
	I0422 12:03:02.156148   56551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 12:03:02.156215   56551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 12:03:02.176662   56551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40141
	I0422 12:03:02.177196   56551 main.go:141] libmachine: () Calling .GetVersion
	I0422 12:03:02.177793   56551 main.go:141] libmachine: Using API Version  1
	I0422 12:03:02.177822   56551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 12:03:02.178168   56551 main.go:141] libmachine: () Calling .GetMachineName
	I0422 12:03:02.178348   56551 main.go:141] libmachine: (pause-253908) Calling .DriverName
	I0422 12:03:02.178507   56551 main.go:141] libmachine: (pause-253908) Calling .GetState
	I0422 12:03:02.180788   56551 fix.go:112] recreateIfNeeded on pause-253908: state=Running err=<nil>
	W0422 12:03:02.180812   56551 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 12:03:02.183005   56551 out.go:177] * Updating the running kvm2 "pause-253908" VM ...
	I0422 12:03:02.184540   56551 machine.go:94] provisionDockerMachine start ...
	I0422 12:03:02.184573   56551 main.go:141] libmachine: (pause-253908) Calling .DriverName
	I0422 12:03:02.184836   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHHostname
	I0422 12:03:02.188043   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:02.188826   56551 main.go:141] libmachine: (pause-253908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:01:84", ip: ""} in network mk-pause-253908: {Iface:virbr2 ExpiryTime:2024-04-22 13:00:56 +0000 UTC Type:0 Mac:52:54:00:c5:01:84 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-253908 Clientid:01:52:54:00:c5:01:84}
	I0422 12:03:02.188851   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined IP address 192.168.50.32 and MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:02.189133   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHPort
	I0422 12:03:02.189320   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHKeyPath
	I0422 12:03:02.189556   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHKeyPath
	I0422 12:03:02.189716   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHUsername
	I0422 12:03:02.189881   56551 main.go:141] libmachine: Using SSH client type: native
	I0422 12:03:02.190108   56551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0422 12:03:02.190124   56551 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 12:03:02.312167   56551 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-253908
	
	I0422 12:03:02.312190   56551 main.go:141] libmachine: (pause-253908) Calling .GetMachineName
	I0422 12:03:02.312493   56551 buildroot.go:166] provisioning hostname "pause-253908"
	I0422 12:03:02.312526   56551 main.go:141] libmachine: (pause-253908) Calling .GetMachineName
	I0422 12:03:02.312744   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHHostname
	I0422 12:03:02.315507   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:02.315995   56551 main.go:141] libmachine: (pause-253908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:01:84", ip: ""} in network mk-pause-253908: {Iface:virbr2 ExpiryTime:2024-04-22 13:00:56 +0000 UTC Type:0 Mac:52:54:00:c5:01:84 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-253908 Clientid:01:52:54:00:c5:01:84}
	I0422 12:03:02.316026   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined IP address 192.168.50.32 and MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:02.316221   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHPort
	I0422 12:03:02.316438   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHKeyPath
	I0422 12:03:02.316615   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHKeyPath
	I0422 12:03:02.316867   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHUsername
	I0422 12:03:02.317096   56551 main.go:141] libmachine: Using SSH client type: native
	I0422 12:03:02.317345   56551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0422 12:03:02.317367   56551 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-253908 && echo "pause-253908" | sudo tee /etc/hostname
	I0422 12:03:02.461875   56551 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-253908
	
	I0422 12:03:02.461905   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHHostname
	I0422 12:03:02.465583   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:02.466040   56551 main.go:141] libmachine: (pause-253908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:01:84", ip: ""} in network mk-pause-253908: {Iface:virbr2 ExpiryTime:2024-04-22 13:00:56 +0000 UTC Type:0 Mac:52:54:00:c5:01:84 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-253908 Clientid:01:52:54:00:c5:01:84}
	I0422 12:03:02.466080   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined IP address 192.168.50.32 and MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:02.466344   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHPort
	I0422 12:03:02.466530   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHKeyPath
	I0422 12:03:02.466692   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHKeyPath
	I0422 12:03:02.466876   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHUsername
	I0422 12:03:02.467127   56551 main.go:141] libmachine: Using SSH client type: native
	I0422 12:03:02.467378   56551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0422 12:03:02.467400   56551 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-253908' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-253908/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-253908' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 12:03:02.583910   56551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 12:03:02.583941   56551 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18711-7633/.minikube CaCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18711-7633/.minikube}
	I0422 12:03:02.583965   56551 buildroot.go:174] setting up certificates
	I0422 12:03:02.583976   56551 provision.go:84] configureAuth start
	I0422 12:03:02.584014   56551 main.go:141] libmachine: (pause-253908) Calling .GetMachineName
	I0422 12:03:02.584305   56551 main.go:141] libmachine: (pause-253908) Calling .GetIP
	I0422 12:03:02.587764   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:02.588189   56551 main.go:141] libmachine: (pause-253908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:01:84", ip: ""} in network mk-pause-253908: {Iface:virbr2 ExpiryTime:2024-04-22 13:00:56 +0000 UTC Type:0 Mac:52:54:00:c5:01:84 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-253908 Clientid:01:52:54:00:c5:01:84}
	I0422 12:03:02.588218   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined IP address 192.168.50.32 and MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:02.588418   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHHostname
	I0422 12:03:02.591082   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:02.591531   56551 main.go:141] libmachine: (pause-253908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:01:84", ip: ""} in network mk-pause-253908: {Iface:virbr2 ExpiryTime:2024-04-22 13:00:56 +0000 UTC Type:0 Mac:52:54:00:c5:01:84 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-253908 Clientid:01:52:54:00:c5:01:84}
	I0422 12:03:02.591570   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined IP address 192.168.50.32 and MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:02.591839   56551 provision.go:143] copyHostCerts
	I0422 12:03:02.591900   56551 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem, removing ...
	I0422 12:03:02.591912   56551 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem
	I0422 12:03:02.591968   56551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/ca.pem (1078 bytes)
	I0422 12:03:02.592067   56551 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem, removing ...
	I0422 12:03:02.592080   56551 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem
	I0422 12:03:02.592108   56551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/cert.pem (1123 bytes)
	I0422 12:03:02.592178   56551 exec_runner.go:144] found /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem, removing ...
	I0422 12:03:02.592190   56551 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem
	I0422 12:03:02.592216   56551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18711-7633/.minikube/key.pem (1679 bytes)
	I0422 12:03:02.592284   56551 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem org=jenkins.pause-253908 san=[127.0.0.1 192.168.50.32 localhost minikube pause-253908]
	I0422 12:03:02.866062   56551 provision.go:177] copyRemoteCerts
	I0422 12:03:02.866137   56551 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 12:03:02.866170   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHHostname
	I0422 12:03:02.869436   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:02.869904   56551 main.go:141] libmachine: (pause-253908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:01:84", ip: ""} in network mk-pause-253908: {Iface:virbr2 ExpiryTime:2024-04-22 13:00:56 +0000 UTC Type:0 Mac:52:54:00:c5:01:84 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-253908 Clientid:01:52:54:00:c5:01:84}
	I0422 12:03:02.869941   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined IP address 192.168.50.32 and MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:02.870311   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHPort
	I0422 12:03:02.870493   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHKeyPath
	I0422 12:03:02.870685   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHUsername
	I0422 12:03:02.870830   56551 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/pause-253908/id_rsa Username:docker}
	I0422 12:03:02.959645   56551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 12:03:03.006335   56551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0422 12:03:03.041515   56551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 12:03:03.072656   56551 provision.go:87] duration metric: took 488.669356ms to configureAuth
	I0422 12:03:03.072685   56551 buildroot.go:189] setting minikube options for container-runtime
	I0422 12:03:03.072958   56551 config.go:182] Loaded profile config "pause-253908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 12:03:03.073043   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHHostname
	I0422 12:03:03.076331   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:03.076856   56551 main.go:141] libmachine: (pause-253908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:01:84", ip: ""} in network mk-pause-253908: {Iface:virbr2 ExpiryTime:2024-04-22 13:00:56 +0000 UTC Type:0 Mac:52:54:00:c5:01:84 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-253908 Clientid:01:52:54:00:c5:01:84}
	I0422 12:03:03.076896   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined IP address 192.168.50.32 and MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:03.077134   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHPort
	I0422 12:03:03.077347   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHKeyPath
	I0422 12:03:03.077534   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHKeyPath
	I0422 12:03:03.077693   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHUsername
	I0422 12:03:03.077898   56551 main.go:141] libmachine: Using SSH client type: native
	I0422 12:03:03.078135   56551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0422 12:03:03.078158   56551 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 12:03:08.752862   56551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 12:03:08.752901   56551 machine.go:97] duration metric: took 6.568345105s to provisionDockerMachine
	I0422 12:03:08.752915   56551 start.go:293] postStartSetup for "pause-253908" (driver="kvm2")
	I0422 12:03:08.752928   56551 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 12:03:08.752969   56551 main.go:141] libmachine: (pause-253908) Calling .DriverName
	I0422 12:03:08.753413   56551 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 12:03:08.753446   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHHostname
	I0422 12:03:08.756284   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:08.756641   56551 main.go:141] libmachine: (pause-253908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:01:84", ip: ""} in network mk-pause-253908: {Iface:virbr2 ExpiryTime:2024-04-22 13:00:56 +0000 UTC Type:0 Mac:52:54:00:c5:01:84 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-253908 Clientid:01:52:54:00:c5:01:84}
	I0422 12:03:08.756667   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined IP address 192.168.50.32 and MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:08.756851   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHPort
	I0422 12:03:08.757053   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHKeyPath
	I0422 12:03:08.757233   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHUsername
	I0422 12:03:08.757440   56551 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/pause-253908/id_rsa Username:docker}
	I0422 12:03:08.849656   56551 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 12:03:08.855763   56551 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 12:03:08.855794   56551 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/addons for local assets ...
	I0422 12:03:08.855859   56551 filesync.go:126] Scanning /home/jenkins/minikube-integration/18711-7633/.minikube/files for local assets ...
	I0422 12:03:08.855979   56551 filesync.go:149] local asset: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem -> 149452.pem in /etc/ssl/certs
	I0422 12:03:08.856106   56551 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 12:03:08.868849   56551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /etc/ssl/certs/149452.pem (1708 bytes)
	I0422 12:03:08.899308   56551 start.go:296] duration metric: took 146.378291ms for postStartSetup
	I0422 12:03:08.899355   56551 fix.go:56] duration metric: took 6.743726352s for fixHost
	I0422 12:03:08.899379   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHHostname
	I0422 12:03:08.902476   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:08.902856   56551 main.go:141] libmachine: (pause-253908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:01:84", ip: ""} in network mk-pause-253908: {Iface:virbr2 ExpiryTime:2024-04-22 13:00:56 +0000 UTC Type:0 Mac:52:54:00:c5:01:84 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-253908 Clientid:01:52:54:00:c5:01:84}
	I0422 12:03:08.902884   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined IP address 192.168.50.32 and MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:08.903035   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHPort
	I0422 12:03:08.903223   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHKeyPath
	I0422 12:03:08.903498   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHKeyPath
	I0422 12:03:08.903619   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHUsername
	I0422 12:03:08.903795   56551 main.go:141] libmachine: Using SSH client type: native
	I0422 12:03:08.903952   56551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0422 12:03:08.903962   56551 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 12:03:09.014451   56551 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713787389.011923738
	
	I0422 12:03:09.014473   56551 fix.go:216] guest clock: 1713787389.011923738
	I0422 12:03:09.014482   56551 fix.go:229] Guest: 2024-04-22 12:03:09.011923738 +0000 UTC Remote: 2024-04-22 12:03:08.899360216 +0000 UTC m=+46.911313728 (delta=112.563522ms)
	I0422 12:03:09.014506   56551 fix.go:200] guest clock delta is within tolerance: 112.563522ms
	I0422 12:03:09.014512   56551 start.go:83] releasing machines lock for "pause-253908", held for 6.85891014s
	I0422 12:03:09.014538   56551 main.go:141] libmachine: (pause-253908) Calling .DriverName
	I0422 12:03:09.014820   56551 main.go:141] libmachine: (pause-253908) Calling .GetIP
	I0422 12:03:09.018176   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:09.018588   56551 main.go:141] libmachine: (pause-253908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:01:84", ip: ""} in network mk-pause-253908: {Iface:virbr2 ExpiryTime:2024-04-22 13:00:56 +0000 UTC Type:0 Mac:52:54:00:c5:01:84 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-253908 Clientid:01:52:54:00:c5:01:84}
	I0422 12:03:09.018618   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined IP address 192.168.50.32 and MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:09.018773   56551 main.go:141] libmachine: (pause-253908) Calling .DriverName
	I0422 12:03:09.019419   56551 main.go:141] libmachine: (pause-253908) Calling .DriverName
	I0422 12:03:09.019611   56551 main.go:141] libmachine: (pause-253908) Calling .DriverName
	I0422 12:03:09.019724   56551 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 12:03:09.019771   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHHostname
	I0422 12:03:09.019838   56551 ssh_runner.go:195] Run: cat /version.json
	I0422 12:03:09.019868   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHHostname
	I0422 12:03:09.027528   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:09.027635   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:09.028077   56551 main.go:141] libmachine: (pause-253908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:01:84", ip: ""} in network mk-pause-253908: {Iface:virbr2 ExpiryTime:2024-04-22 13:00:56 +0000 UTC Type:0 Mac:52:54:00:c5:01:84 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-253908 Clientid:01:52:54:00:c5:01:84}
	I0422 12:03:09.028103   56551 main.go:141] libmachine: (pause-253908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:01:84", ip: ""} in network mk-pause-253908: {Iface:virbr2 ExpiryTime:2024-04-22 13:00:56 +0000 UTC Type:0 Mac:52:54:00:c5:01:84 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-253908 Clientid:01:52:54:00:c5:01:84}
	I0422 12:03:09.028122   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined IP address 192.168.50.32 and MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:09.028139   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined IP address 192.168.50.32 and MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:09.028297   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHPort
	I0422 12:03:09.028510   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHKeyPath
	I0422 12:03:09.028535   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHPort
	I0422 12:03:09.028744   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHUsername
	I0422 12:03:09.028753   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHKeyPath
	I0422 12:03:09.028964   56551 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/pause-253908/id_rsa Username:docker}
	I0422 12:03:09.028993   56551 main.go:141] libmachine: (pause-253908) Calling .GetSSHUsername
	I0422 12:03:09.029150   56551 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/pause-253908/id_rsa Username:docker}
	I0422 12:03:09.115843   56551 ssh_runner.go:195] Run: systemctl --version
	I0422 12:03:09.142596   56551 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 12:03:09.321209   56551 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 12:03:09.331846   56551 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 12:03:09.331929   56551 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 12:03:09.343159   56551 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0422 12:03:09.343189   56551 start.go:494] detecting cgroup driver to use...
	I0422 12:03:09.343261   56551 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 12:03:09.364081   56551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 12:03:09.380395   56551 docker.go:217] disabling cri-docker service (if available) ...
	I0422 12:03:09.380456   56551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 12:03:09.397477   56551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 12:03:09.413987   56551 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 12:03:09.588913   56551 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 12:03:09.764172   56551 docker.go:233] disabling docker service ...
	I0422 12:03:09.764247   56551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 12:03:09.786563   56551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 12:03:09.803579   56551 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 12:03:09.971667   56551 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 12:03:10.146875   56551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 12:03:10.165811   56551 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 12:03:10.194839   56551 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 12:03:10.194902   56551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:03:10.208806   56551 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 12:03:10.208886   56551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:03:10.224502   56551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:03:10.236574   56551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:03:10.248660   56551 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 12:03:10.262170   56551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:03:10.275608   56551 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:03:10.293048   56551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 12:03:10.309400   56551 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 12:03:10.323705   56551 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 12:03:10.337686   56551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 12:03:10.609586   56551 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 12:03:11.261854   56551 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 12:03:11.261925   56551 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 12:03:11.268558   56551 start.go:562] Will wait 60s for crictl version
	I0422 12:03:11.268606   56551 ssh_runner.go:195] Run: which crictl
	I0422 12:03:11.275064   56551 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 12:03:11.330930   56551 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 12:03:11.331013   56551 ssh_runner.go:195] Run: crio --version
	I0422 12:03:11.378120   56551 ssh_runner.go:195] Run: crio --version
	I0422 12:03:11.424819   56551 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 12:03:11.426207   56551 main.go:141] libmachine: (pause-253908) Calling .GetIP
	I0422 12:03:11.429341   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:11.429699   56551 main.go:141] libmachine: (pause-253908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:01:84", ip: ""} in network mk-pause-253908: {Iface:virbr2 ExpiryTime:2024-04-22 13:00:56 +0000 UTC Type:0 Mac:52:54:00:c5:01:84 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-253908 Clientid:01:52:54:00:c5:01:84}
	I0422 12:03:11.429729   56551 main.go:141] libmachine: (pause-253908) DBG | domain pause-253908 has defined IP address 192.168.50.32 and MAC address 52:54:00:c5:01:84 in network mk-pause-253908
	I0422 12:03:11.429908   56551 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0422 12:03:11.435252   56551 kubeadm.go:877] updating cluster {Name:pause-253908 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-253908 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy
:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 12:03:11.435421   56551 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 12:03:11.435490   56551 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 12:03:11.492813   56551 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 12:03:11.492838   56551 crio.go:433] Images already preloaded, skipping extraction
	I0422 12:03:11.492895   56551 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 12:03:11.595334   56551 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 12:03:11.595359   56551 cache_images.go:84] Images are preloaded, skipping loading
	I0422 12:03:11.595380   56551 kubeadm.go:928] updating node { 192.168.50.32 8443 v1.30.0 crio true true} ...
	I0422 12:03:11.595539   56551 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-253908 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:pause-253908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 12:03:11.595637   56551 ssh_runner.go:195] Run: crio config
	I0422 12:03:11.839605   56551 cni.go:84] Creating CNI manager for ""
	I0422 12:03:11.839640   56551 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 12:03:11.839657   56551 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 12:03:11.839697   56551 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-253908 NodeName:pause-253908 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 12:03:11.839910   56551 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-253908"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 12:03:11.839983   56551 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 12:03:11.892991   56551 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 12:03:11.893062   56551 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 12:03:11.961199   56551 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0422 12:03:12.022500   56551 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 12:03:12.105572   56551 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0422 12:03:12.163087   56551 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I0422 12:03:12.185139   56551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 12:03:12.453616   56551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 12:03:12.479851   56551 certs.go:68] Setting up /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/pause-253908 for IP: 192.168.50.32
	I0422 12:03:12.479875   56551 certs.go:194] generating shared ca certs ...
	I0422 12:03:12.479895   56551 certs.go:226] acquiring lock for ca certs: {Name:mk0b77082b88c771d0b00be5267ca31dfee6f85a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:03:12.480064   56551 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key
	I0422 12:03:12.480127   56551 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key
	I0422 12:03:12.480141   56551 certs.go:256] generating profile certs ...
	I0422 12:03:12.480247   56551 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/pause-253908/client.key
	I0422 12:03:12.480335   56551 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/pause-253908/apiserver.key.7746cd1c
	I0422 12:03:12.480387   56551 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/pause-253908/proxy-client.key
	I0422 12:03:12.480519   56551 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem (1338 bytes)
	W0422 12:03:12.480560   56551 certs.go:480] ignoring /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945_empty.pem, impossibly tiny 0 bytes
	I0422 12:03:12.480573   56551 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca-key.pem (1679 bytes)
	I0422 12:03:12.480604   56551 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/ca.pem (1078 bytes)
	I0422 12:03:12.480635   56551 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/cert.pem (1123 bytes)
	I0422 12:03:12.480663   56551 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/certs/key.pem (1679 bytes)
	I0422 12:03:12.480713   56551 certs.go:484] found cert: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem (1708 bytes)
	I0422 12:03:12.481557   56551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 12:03:12.529516   56551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 12:03:12.641663   56551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 12:03:12.682027   56551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 12:03:12.713879   56551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/pause-253908/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0422 12:03:12.744989   56551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/pause-253908/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 12:03:12.774144   56551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/pause-253908/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 12:03:12.804538   56551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/pause-253908/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 12:03:12.851440   56551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/certs/14945.pem --> /usr/share/ca-certificates/14945.pem (1338 bytes)
	I0422 12:03:12.887420   56551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/ssl/certs/149452.pem --> /usr/share/ca-certificates/149452.pem (1708 bytes)
	I0422 12:03:12.923978   56551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18711-7633/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 12:03:12.980478   56551 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 12:03:13.015889   56551 ssh_runner.go:195] Run: openssl version
	I0422 12:03:13.027764   56551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 12:03:13.045165   56551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 12:03:13.050536   56551 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 10:38 /usr/share/ca-certificates/minikubeCA.pem
	I0422 12:03:13.050609   56551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 12:03:13.057895   56551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 12:03:13.075729   56551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14945.pem && ln -fs /usr/share/ca-certificates/14945.pem /etc/ssl/certs/14945.pem"
	I0422 12:03:13.103479   56551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14945.pem
	I0422 12:03:13.111040   56551 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 10:51 /usr/share/ca-certificates/14945.pem
	I0422 12:03:13.111126   56551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14945.pem
	I0422 12:03:13.119497   56551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14945.pem /etc/ssl/certs/51391683.0"
	I0422 12:03:13.136268   56551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149452.pem && ln -fs /usr/share/ca-certificates/149452.pem /etc/ssl/certs/149452.pem"
	I0422 12:03:13.155338   56551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149452.pem
	I0422 12:03:13.161574   56551 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 10:51 /usr/share/ca-certificates/149452.pem
	I0422 12:03:13.161651   56551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149452.pem
	I0422 12:03:13.171508   56551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149452.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 12:03:13.188708   56551 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 12:03:13.195258   56551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 12:03:13.205832   56551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 12:03:13.213707   56551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 12:03:13.223091   56551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 12:03:13.232731   56551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 12:03:13.242572   56551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 12:03:13.250494   56551 kubeadm.go:391] StartCluster: {Name:pause-253908 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-253908 Namespa
ce:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:fa
lse portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 12:03:13.250665   56551 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 12:03:13.250725   56551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 12:03:13.333130   56551 cri.go:89] found id: "cf90c223253c36939ade0b1ead99909a2fa6f369ccb5853e200ec72b758cb4b0"
	I0422 12:03:13.333155   56551 cri.go:89] found id: "3bed2d08de3cfa628fa16aea071722f4bb8fcc3457bcf689579e8ac0a6954ed8"
	I0422 12:03:13.333160   56551 cri.go:89] found id: "0813fabc9fa6ba17b6726b6d8301a1540805164f9f3a949ad51582a197097d2b"
	I0422 12:03:13.333164   56551 cri.go:89] found id: "eb1145b77344e2c000127bdd6c9597830f1d5a68c2e0f5c258f7b7976c489984"
	I0422 12:03:13.333168   56551 cri.go:89] found id: "f018015c8868103b2edcf518d46e9d2d743f80366180fd28df34e13a3f133e66"
	I0422 12:03:13.333173   56551 cri.go:89] found id: "a7bee21e7ee8ce7721257dece87176aa1e69cb7ecf64a97acbe105b8dc8a8101"
	I0422 12:03:13.333177   56551 cri.go:89] found id: "c85b0b4fb2ae515dd9ca5c77f4209d6cb8984b16b4c7abe738e8a5136d778ab9"
	I0422 12:03:13.333181   56551 cri.go:89] found id: "10292279919eb6c1448bb627e87ede89b1c25d2553ec671ebd6fbaba662fafcf"
	I0422 12:03:13.333185   56551 cri.go:89] found id: "5c7576386d2495d9c30c1bd0fa44854f6537911a27b365b1a8a09554d984ff78"
	I0422 12:03:13.333192   56551 cri.go:89] found id: ""
	I0422 12:03:13.333252   56551 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-253908 -n pause-253908
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-253908 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-253908 logs -n 25: (1.622102617s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-230092 sudo                                | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo                                | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo cat                            | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo cat                            | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo                                | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo                                | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo                                | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo cat                            | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo cat                            | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo                                | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo                                | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo                                | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo find                           | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo crio                           | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-230092                                     | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC | 22 Apr 24 12:01 UTC |
	| start   | -p cert-expiration-454029                            | cert-expiration-454029    | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC | 22 Apr 24 12:02 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-262232                          | force-systemd-env-262232  | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC | 22 Apr 24 12:01 UTC |
	| start   | -p force-systemd-flag-905296                         | force-systemd-flag-905296 | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC | 22 Apr 24 12:03 UTC |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-483459                               | NoKubernetes-483459       | jenkins | v1.33.0 | 22 Apr 24 12:02 UTC | 22 Apr 24 12:03 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-253908                                      | pause-253908              | jenkins | v1.33.0 | 22 Apr 24 12:02 UTC | 22 Apr 24 12:03 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-483459                               | NoKubernetes-483459       | jenkins | v1.33.0 | 22 Apr 24 12:03 UTC | 22 Apr 24 12:03 UTC |
	| start   | -p NoKubernetes-483459                               | NoKubernetes-483459       | jenkins | v1.33.0 | 22 Apr 24 12:03 UTC |                     |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-905296 ssh cat                    | force-systemd-flag-905296 | jenkins | v1.33.0 | 22 Apr 24 12:03 UTC | 22 Apr 24 12:03 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-905296                         | force-systemd-flag-905296 | jenkins | v1.33.0 | 22 Apr 24 12:03 UTC | 22 Apr 24 12:03 UTC |
	| start   | -p running-upgrade-307156                            | minikube                  | jenkins | v1.26.0 | 22 Apr 24 12:03 UTC |                     |
	|         | --memory=2200 --vm-driver=kvm2                       |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 12:03:20
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 12:03:20.466357   57341 out.go:296] Setting OutFile to fd 1 ...
	I0422 12:03:20.466545   57341 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0422 12:03:20.466550   57341 out.go:309] Setting ErrFile to fd 2...
	I0422 12:03:20.466556   57341 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0422 12:03:20.467432   57341 root.go:329] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 12:03:20.467983   57341 out.go:303] Setting JSON to false
	I0422 12:03:20.469338   57341 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6344,"bootTime":1713781057,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 12:03:20.469416   57341 start.go:125] virtualization: kvm guest
	I0422 12:03:20.472883   57341 out.go:177] * [running-upgrade-307156] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 12:03:20.474864   57341 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 12:03:20.474903   57341 notify.go:193] Checking for updates...
	I0422 12:03:20.477962   57341 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 12:03:20.479421   57341 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 12:03:20.480754   57341 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 12:03:20.482028   57341 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 12:03:20.483250   57341 out.go:177]   - KUBECONFIG=/tmp/legacy_kubeconfig3970519147
	I0422 12:03:20.486842   57341 config.go:178] Loaded profile config "NoKubernetes-483459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0422 12:03:20.487336   57341 config.go:178] Loaded profile config "cert-expiration-454029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 12:03:20.487562   57341 config.go:178] Loaded profile config "pause-253908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 12:03:20.487727   57341 driver.go:360] Setting default libvirt URI to qemu:///system
	I0422 12:03:20.527345   57341 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 12:03:20.529030   57341 start.go:284] selected driver: kvm2
	I0422 12:03:20.529042   57341 start.go:805] validating driver "kvm2" against <nil>
	I0422 12:03:20.529065   57341 start.go:816] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 12:03:20.530087   57341 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 12:03:20.530387   57341 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18711-7633/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 12:03:20.546781   57341 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 12:03:20.546870   57341 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0422 12:03:20.547145   57341 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0422 12:03:20.547183   57341 cni.go:95] Creating CNI manager for ""
	I0422 12:03:20.547192   57341 cni.go:165] "kvm2" driver + crio runtime found, recommending bridge
	I0422 12:03:20.547231   57341 start_flags.go:305] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 12:03:20.547240   57341 start_flags.go:310] config:
	{Name:running-upgrade-307156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-307156 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0422 12:03:20.547360   57341 iso.go:128] acquiring lock: {Name:mk0c56e2dcd5f26df497ba87732f90c76c5965a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 12:03:20.550446   57341 out.go:177] * Downloading VM boot image ...
	I0422 12:03:20.551967   57341 download.go:101] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.26.0-amd64.iso
	I0422 12:03:20.659308   57341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/last_update_check: {Name:mk9b4930b7746fe1fe003dc6692c783af5901ed9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:03:20.661622   57341 out.go:177] * minikube 1.33.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.33.0
	I0422 12:03:20.663132   57341 out.go:177] * To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	I0422 12:03:17.214249   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | domain NoKubernetes-483459 has defined MAC address 52:54:00:70:98:b9 in network mk-NoKubernetes-483459
	I0422 12:03:17.214656   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | unable to find current IP address of domain NoKubernetes-483459 in network mk-NoKubernetes-483459
	I0422 12:03:17.214679   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | I0422 12:03:17.214615   57074 retry.go:31] will retry after 1.580522427s: waiting for machine to come up
	I0422 12:03:18.796306   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | domain NoKubernetes-483459 has defined MAC address 52:54:00:70:98:b9 in network mk-NoKubernetes-483459
	I0422 12:03:18.796916   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | unable to find current IP address of domain NoKubernetes-483459 in network mk-NoKubernetes-483459
	I0422 12:03:18.796928   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | I0422 12:03:18.796849   57074 retry.go:31] will retry after 1.850880239s: waiting for machine to come up
	I0422 12:03:20.649937   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | domain NoKubernetes-483459 has defined MAC address 52:54:00:70:98:b9 in network mk-NoKubernetes-483459
	I0422 12:03:20.650511   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | unable to find current IP address of domain NoKubernetes-483459 in network mk-NoKubernetes-483459
	I0422 12:03:20.650535   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | I0422 12:03:20.650457   57074 retry.go:31] will retry after 3.329779267s: waiting for machine to come up
	I0422 12:03:17.430723   56551 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0422 12:03:20.205954   56551 api_server.go:279] https://192.168.50.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 12:03:20.205991   56551 api_server.go:103] status: https://192.168.50.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 12:03:20.206005   56551 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0422 12:03:20.226043   56551 api_server.go:279] https://192.168.50.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 12:03:20.226071   56551 api_server.go:103] status: https://192.168.50.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 12:03:20.434505   56551 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0422 12:03:20.441203   56551 api_server.go:279] https://192.168.50.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 12:03:20.441246   56551 api_server.go:103] status: https://192.168.50.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 12:03:20.930543   56551 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0422 12:03:20.943204   56551 api_server.go:279] https://192.168.50.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 12:03:20.943237   56551 api_server.go:103] status: https://192.168.50.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 12:03:21.430566   56551 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0422 12:03:21.442402   56551 api_server.go:279] https://192.168.50.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 12:03:21.442438   56551 api_server.go:103] status: https://192.168.50.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 12:03:21.930899   56551 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0422 12:03:21.941408   56551 api_server.go:279] https://192.168.50.32:8443/healthz returned 200:
	ok
	I0422 12:03:21.960276   56551 api_server.go:141] control plane version: v1.30.0
	I0422 12:03:21.960316   56551 api_server.go:131] duration metric: took 5.029935217s to wait for apiserver health ...
	I0422 12:03:21.960329   56551 cni.go:84] Creating CNI manager for ""
	I0422 12:03:21.960340   56551 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 12:03:21.962657   56551 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 12:03:21.964298   56551 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 12:03:21.982072   56551 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 12:03:22.015149   56551 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 12:03:23.981972   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | domain NoKubernetes-483459 has defined MAC address 52:54:00:70:98:b9 in network mk-NoKubernetes-483459
	I0422 12:03:23.982416   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | unable to find current IP address of domain NoKubernetes-483459 in network mk-NoKubernetes-483459
	I0422 12:03:23.982438   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | I0422 12:03:23.982367   57074 retry.go:31] will retry after 2.834314172s: waiting for machine to come up
	I0422 12:03:22.038773   56551 system_pods.go:59] 6 kube-system pods found
	I0422 12:03:22.038812   56551 system_pods.go:61] "coredns-7db6d8ff4d-fzqm5" [6ea190b9-5bd9-4674-a3b8-bdc5a570f968] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 12:03:22.038822   56551 system_pods.go:61] "etcd-pause-253908" [e91f3f14-3334-4ab6-af81-8c11d6532aba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 12:03:22.038837   56551 system_pods.go:61] "kube-apiserver-pause-253908" [89ac120e-0b2e-44dc-86a1-abd298f25ce4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 12:03:22.038846   56551 system_pods.go:61] "kube-controller-manager-pause-253908" [5ffc7275-ee98-4872-8b9e-5c572501f969] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 12:03:22.038858   56551 system_pods.go:61] "kube-proxy-g6k67" [24d6aa75-1782-4786-89e8-922591a81986] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0422 12:03:22.038866   56551 system_pods.go:61] "kube-scheduler-pause-253908" [8c569828-617e-45d0-86f1-520ccb84fc47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 12:03:22.038875   56551 system_pods.go:74] duration metric: took 23.697976ms to wait for pod list to return data ...
	I0422 12:03:22.038893   56551 node_conditions.go:102] verifying NodePressure condition ...
	I0422 12:03:22.042947   56551 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 12:03:22.042972   56551 node_conditions.go:123] node cpu capacity is 2
	I0422 12:03:22.042983   56551 node_conditions.go:105] duration metric: took 4.085244ms to run NodePressure ...
	I0422 12:03:22.043002   56551 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 12:03:22.422034   56551 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 12:03:22.428252   56551 kubeadm.go:733] kubelet initialised
	I0422 12:03:22.428276   56551 kubeadm.go:734] duration metric: took 6.214177ms waiting for restarted kubelet to initialise ...
	I0422 12:03:22.428287   56551 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 12:03:22.434050   56551 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-fzqm5" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:24.441665   56551 pod_ready.go:102] pod "coredns-7db6d8ff4d-fzqm5" in "kube-system" namespace has status "Ready":"False"
	I0422 12:03:26.442684   56551 pod_ready.go:92] pod "coredns-7db6d8ff4d-fzqm5" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:26.442709   56551 pod_ready.go:81] duration metric: took 4.008631391s for pod "coredns-7db6d8ff4d-fzqm5" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:26.442723   56551 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:29.215500   57341 out.go:177] * Starting control plane node running-upgrade-307156 in cluster running-upgrade-307156
	I0422 12:03:29.217247   57341 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0422 12:03:29.326892   57341 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.1/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0422 12:03:29.326915   57341 cache.go:57] Caching tarball of preloaded images
	I0422 12:03:29.327164   57341 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0422 12:03:29.329206   57341 out.go:177] * Downloading Kubernetes v1.24.1 preload ...
	I0422 12:03:29.330573   57341 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 ...
	I0422 12:03:29.446981   57341 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.1/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:4c8ad2429eafc79a0e5a20bdf41ae0bc -> /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0422 12:03:26.818092   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | domain NoKubernetes-483459 has defined MAC address 52:54:00:70:98:b9 in network mk-NoKubernetes-483459
	I0422 12:03:26.818652   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | unable to find current IP address of domain NoKubernetes-483459 in network mk-NoKubernetes-483459
	I0422 12:03:26.818676   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | I0422 12:03:26.818608   57074 retry.go:31] will retry after 3.603059437s: waiting for machine to come up
	I0422 12:03:30.422901   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | domain NoKubernetes-483459 has defined MAC address 52:54:00:70:98:b9 in network mk-NoKubernetes-483459
	I0422 12:03:30.423437   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | unable to find current IP address of domain NoKubernetes-483459 in network mk-NoKubernetes-483459
	I0422 12:03:30.423460   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | I0422 12:03:30.423387   57074 retry.go:31] will retry after 6.23834071s: waiting for machine to come up
	I0422 12:03:27.950996   56551 pod_ready.go:92] pod "etcd-pause-253908" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:27.951024   56551 pod_ready.go:81] duration metric: took 1.508291694s for pod "etcd-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:27.951038   56551 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:27.956527   56551 pod_ready.go:92] pod "kube-apiserver-pause-253908" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:27.956550   56551 pod_ready.go:81] duration metric: took 5.502667ms for pod "kube-apiserver-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:27.956561   56551 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:27.963296   56551 pod_ready.go:92] pod "kube-controller-manager-pause-253908" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:27.963323   56551 pod_ready.go:81] duration metric: took 6.752295ms for pod "kube-controller-manager-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:27.963337   56551 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g6k67" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:27.971973   56551 pod_ready.go:92] pod "kube-proxy-g6k67" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:27.971995   56551 pod_ready.go:81] duration metric: took 8.651011ms for pod "kube-proxy-g6k67" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:27.972004   56551 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:29.978838   56551 pod_ready.go:102] pod "kube-scheduler-pause-253908" in "kube-system" namespace has status "Ready":"False"
	I0422 12:03:31.980267   56551 pod_ready.go:92] pod "kube-scheduler-pause-253908" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:31.980289   56551 pod_ready.go:81] duration metric: took 4.008279691s for pod "kube-scheduler-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:31.980297   56551 pod_ready.go:38] duration metric: took 9.551999261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 12:03:31.980320   56551 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 12:03:31.995218   56551 ops.go:34] apiserver oom_adj: -16
	I0422 12:03:31.995240   56551 kubeadm.go:591] duration metric: took 18.568866736s to restartPrimaryControlPlane
	I0422 12:03:31.995251   56551 kubeadm.go:393] duration metric: took 18.744771729s to StartCluster
	I0422 12:03:31.995278   56551 settings.go:142] acquiring lock: {Name:mkd680667f0df4166491741d55b55ac111bb0138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:03:31.995356   56551 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 12:03:31.996637   56551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/kubeconfig: {Name:mkee6de4c6906fe5621e8aeac858a93219648db5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:03:31.996926   56551 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 12:03:31.998948   56551 out.go:177] * Verifying Kubernetes components...
	I0422 12:03:31.996993   56551 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 12:03:31.997281   56551 config.go:182] Loaded profile config "pause-253908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 12:03:32.000506   56551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 12:03:32.002385   56551 out.go:177] * Enabled addons: 
	I0422 12:03:32.003934   56551 addons.go:505] duration metric: took 6.957838ms for enable addons: enabled=[]
	I0422 12:03:32.179183   56551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 12:03:32.204051   56551 node_ready.go:35] waiting up to 6m0s for node "pause-253908" to be "Ready" ...
	I0422 12:03:32.207198   56551 node_ready.go:49] node "pause-253908" has status "Ready":"True"
	I0422 12:03:32.207217   56551 node_ready.go:38] duration metric: took 3.132304ms for node "pause-253908" to be "Ready" ...
	I0422 12:03:32.207227   56551 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 12:03:32.213177   56551 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fzqm5" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:32.218404   56551 pod_ready.go:92] pod "coredns-7db6d8ff4d-fzqm5" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:32.218423   56551 pod_ready.go:81] duration metric: took 5.22158ms for pod "coredns-7db6d8ff4d-fzqm5" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:32.218431   56551 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:32.440108   56551 pod_ready.go:92] pod "etcd-pause-253908" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:32.440131   56551 pod_ready.go:81] duration metric: took 221.694706ms for pod "etcd-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:32.440141   56551 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:32.839827   56551 pod_ready.go:92] pod "kube-apiserver-pause-253908" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:32.839852   56551 pod_ready.go:81] duration metric: took 399.704925ms for pod "kube-apiserver-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:32.839862   56551 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:33.241979   56551 pod_ready.go:92] pod "kube-controller-manager-pause-253908" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:33.242012   56551 pod_ready.go:81] duration metric: took 402.142448ms for pod "kube-controller-manager-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:33.242027   56551 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g6k67" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:33.641645   56551 pod_ready.go:92] pod "kube-proxy-g6k67" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:33.641680   56551 pod_ready.go:81] duration metric: took 399.644678ms for pod "kube-proxy-g6k67" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:33.641696   56551 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:34.039339   56551 pod_ready.go:92] pod "kube-scheduler-pause-253908" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:34.039366   56551 pod_ready.go:81] duration metric: took 397.661927ms for pod "kube-scheduler-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:34.039375   56551 pod_ready.go:38] duration metric: took 1.832136963s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 12:03:34.039392   56551 api_server.go:52] waiting for apiserver process to appear ...
	I0422 12:03:34.039455   56551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 12:03:34.059295   56551 api_server.go:72] duration metric: took 2.06233036s to wait for apiserver process to appear ...
	I0422 12:03:34.059323   56551 api_server.go:88] waiting for apiserver healthz status ...
	I0422 12:03:34.059345   56551 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0422 12:03:34.065066   56551 api_server.go:279] https://192.168.50.32:8443/healthz returned 200:
	ok
	I0422 12:03:34.066126   56551 api_server.go:141] control plane version: v1.30.0
	I0422 12:03:34.066146   56551 api_server.go:131] duration metric: took 6.817333ms to wait for apiserver health ...
	I0422 12:03:34.066154   56551 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 12:03:34.242544   56551 system_pods.go:59] 6 kube-system pods found
	I0422 12:03:34.242572   56551 system_pods.go:61] "coredns-7db6d8ff4d-fzqm5" [6ea190b9-5bd9-4674-a3b8-bdc5a570f968] Running
	I0422 12:03:34.242576   56551 system_pods.go:61] "etcd-pause-253908" [e91f3f14-3334-4ab6-af81-8c11d6532aba] Running
	I0422 12:03:34.242585   56551 system_pods.go:61] "kube-apiserver-pause-253908" [89ac120e-0b2e-44dc-86a1-abd298f25ce4] Running
	I0422 12:03:34.242589   56551 system_pods.go:61] "kube-controller-manager-pause-253908" [5ffc7275-ee98-4872-8b9e-5c572501f969] Running
	I0422 12:03:34.242592   56551 system_pods.go:61] "kube-proxy-g6k67" [24d6aa75-1782-4786-89e8-922591a81986] Running
	I0422 12:03:34.242595   56551 system_pods.go:61] "kube-scheduler-pause-253908" [8c569828-617e-45d0-86f1-520ccb84fc47] Running
	I0422 12:03:34.242600   56551 system_pods.go:74] duration metric: took 176.442257ms to wait for pod list to return data ...
	I0422 12:03:34.242608   56551 default_sa.go:34] waiting for default service account to be created ...
	I0422 12:03:34.439760   56551 default_sa.go:45] found service account: "default"
	I0422 12:03:34.439786   56551 default_sa.go:55] duration metric: took 197.172392ms for default service account to be created ...
	I0422 12:03:34.439795   56551 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 12:03:34.642739   56551 system_pods.go:86] 6 kube-system pods found
	I0422 12:03:34.642797   56551 system_pods.go:89] "coredns-7db6d8ff4d-fzqm5" [6ea190b9-5bd9-4674-a3b8-bdc5a570f968] Running
	I0422 12:03:34.642805   56551 system_pods.go:89] "etcd-pause-253908" [e91f3f14-3334-4ab6-af81-8c11d6532aba] Running
	I0422 12:03:34.642813   56551 system_pods.go:89] "kube-apiserver-pause-253908" [89ac120e-0b2e-44dc-86a1-abd298f25ce4] Running
	I0422 12:03:34.642820   56551 system_pods.go:89] "kube-controller-manager-pause-253908" [5ffc7275-ee98-4872-8b9e-5c572501f969] Running
	I0422 12:03:34.642826   56551 system_pods.go:89] "kube-proxy-g6k67" [24d6aa75-1782-4786-89e8-922591a81986] Running
	I0422 12:03:34.642837   56551 system_pods.go:89] "kube-scheduler-pause-253908" [8c569828-617e-45d0-86f1-520ccb84fc47] Running
	I0422 12:03:34.642852   56551 system_pods.go:126] duration metric: took 203.050874ms to wait for k8s-apps to be running ...
	I0422 12:03:34.642861   56551 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 12:03:34.642923   56551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 12:03:34.664107   56551 system_svc.go:56] duration metric: took 21.23486ms WaitForService to wait for kubelet
	I0422 12:03:34.664137   56551 kubeadm.go:576] duration metric: took 2.667178463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 12:03:34.664165   56551 node_conditions.go:102] verifying NodePressure condition ...
	I0422 12:03:34.839687   56551 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 12:03:34.839717   56551 node_conditions.go:123] node cpu capacity is 2
	I0422 12:03:34.839731   56551 node_conditions.go:105] duration metric: took 175.559846ms to run NodePressure ...
	I0422 12:03:34.839746   56551 start.go:240] waiting for startup goroutines ...
	I0422 12:03:34.839755   56551 start.go:245] waiting for cluster config update ...
	I0422 12:03:34.839766   56551 start.go:254] writing updated cluster config ...
	I0422 12:03:34.840139   56551 ssh_runner.go:195] Run: rm -f paused
	I0422 12:03:34.895958   56551 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 12:03:34.898239   56551 out.go:177] * Done! kubectl is now configured to use "pause-253908" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.670024951Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713787415669997198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0427c4fe-72e1-4802-8ca6-07f8ad66ad6b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.671031200Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39d0302c-19eb-4c7d-ab28-16dd38f482eb name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.671352654Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39d0302c-19eb-4c7d-ab28-16dd38f482eb name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.673302720Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:523eda3c2ab8f80eff05dd7d1e88881e2a78abe174394518a282c0dd4a1e820d,PodSandboxId:49678c14a817da2971dad324e45aa10c38e46bbed10391da1639e621e48abad2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713787401752579561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,},Annotations:map[string]string{io.kubernetes.container.hash: 291d2b05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc8cafc02966b54e54268a2d4c374f58c975dadb9aed0080ac448d01833570a,PodSandboxId:dfbc0ab2d6067610a97488007717cf658c6ce0ad89b8d174639b6f5b3b891ef9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713787401403510512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 24d6aa75-1782-4786-89e8-922591a81986,},Annotations:map[string]string{io.kubernetes.container.hash: d69e02e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e899a93d79f4f434c3090ae4f521cf4a77cf11d0d86c0fc39f8a3a859de477b,PodSandboxId:a273e2e2eeee48c1a08864beb2b6c9021fd0eb8c9da5226b4d20909fe4a57910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713787396647008145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5ad272a14
93e5b272b1abb8c5c83078,},Annotations:map[string]string{io.kubernetes.container.hash: 1a2991e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981d2d6d63e8de7ba29edb2287de8176003035a044b6405a584c6b96084a41a0,PodSandboxId:870b9c8e48c332386ae5c1672618e11f719ffa35209e4b70222f58b9a34c9deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713787396342621524,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb27f7952a7420d51aa5450257e
91e7,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3b6146205338e6a1e1d50c3496b68d7eba4009f113c0391f1e3aba0855932b0,PodSandboxId:45562200369ac50ce5c6b2e9cacc0d3881fa654c548a63e20616946b204e4f5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713787396329504198,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a779bf320e79fd7a238207a9693e7ba,},Annotations:map[string]string{io.kubernet
es.container.hash: 5d9fc288,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd5c3990237627a631e36507aee77d29ea52967df0b6e46e51b0e06cd73a8a2c,PodSandboxId:5cd0ea90f87a902e27e894187b2d4f378a295fffe68a08eff1856b239e136445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713787396308812291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba4d5352a7dd71669287409cca46b471,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf90c223253c36939ade0b1ead99909a2fa6f369ccb5853e200ec72b758cb4b0,PodSandboxId:870b9c8e48c332386ae5c1672618e11f719ffa35209e4b70222f58b9a34c9deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713787391959825155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb27f7952a7420d51aa5450257e91e7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bed2d08de3cfa628fa16aea071722f4bb8fcc3457bcf689579e8ac0a6954ed8,PodSandboxId:45562200369ac50ce5c6b2e9cacc0d3881fa654c548a63e20616946b204e4f5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713787391866967994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a779bf320e79fd7a238207a9693e7ba,},Annotations:map[string]string{io.kubernetes.container.hash: 5d9fc288,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0813fabc9fa6ba17b6726b6d8301a1540805164f9f3a949ad51582a197097d2b,PodSandboxId:5cd0ea90f87a902e27e894187b2d4f378a295fffe68a08eff1856b239e136445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713787391836025181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba4d5352a7dd71669287409cca46b471,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1145b77344e2c000127bdd6c9597830f1d5a68c2e0f5c258f7b7976c489984,PodSandboxId:4560018c43edb5052a5cb74cd84ec02052ecace3b29f58d5387c92c69c427e8a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713787300781456327,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d6aa75-1782-4786-89e8-922591a81986,},Annotations:map[string]string{io.kubernetes.container.hash: d69e02e5,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f018015c8868103b2edcf518d46e9d2d743f80366180fd28df34e13a3f133e66,PodSandboxId:0d4df0d32e1c8a6e049ecbf06bedcc9be3aefee9a99a2ce06ebffbfecc637632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713787300567216896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,},Annotations:map[string]string{io.kubernetes.container.hash: 291d2b05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c85b0b4fb2ae515dd9ca5c77f4209d6cb8984b16b4c7abe738e8a5136d778ab9,PodSandboxId:515fd181b7d351a4c5a648d779142c910ebf0f7ba310412789e876c5062e14e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713787277096422946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e5ad272a1493e5b272b1abb8c5c83078,},Annotations:map[string]string{io.kubernetes.container.hash: 1a2991e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39d0302c-19eb-4c7d-ab28-16dd38f482eb name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.725320497Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21d25a7e-1c04-4e69-9c05-14ae852910a1 name=/runtime.v1.RuntimeService/Version
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.725393883Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21d25a7e-1c04-4e69-9c05-14ae852910a1 name=/runtime.v1.RuntimeService/Version
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.727271346Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15fd88f1-86fe-45a8-8ed6-ab81168286c5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.727645346Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713787415727621080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15fd88f1-86fe-45a8-8ed6-ab81168286c5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.728628634Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7607ab43-4d94-427a-9f2e-b7178d0fc9c4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.728746728Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7607ab43-4d94-427a-9f2e-b7178d0fc9c4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.729018899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:523eda3c2ab8f80eff05dd7d1e88881e2a78abe174394518a282c0dd4a1e820d,PodSandboxId:49678c14a817da2971dad324e45aa10c38e46bbed10391da1639e621e48abad2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713787401752579561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,},Annotations:map[string]string{io.kubernetes.container.hash: 291d2b05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc8cafc02966b54e54268a2d4c374f58c975dadb9aed0080ac448d01833570a,PodSandboxId:dfbc0ab2d6067610a97488007717cf658c6ce0ad89b8d174639b6f5b3b891ef9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713787401403510512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 24d6aa75-1782-4786-89e8-922591a81986,},Annotations:map[string]string{io.kubernetes.container.hash: d69e02e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e899a93d79f4f434c3090ae4f521cf4a77cf11d0d86c0fc39f8a3a859de477b,PodSandboxId:a273e2e2eeee48c1a08864beb2b6c9021fd0eb8c9da5226b4d20909fe4a57910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713787396647008145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5ad272a14
93e5b272b1abb8c5c83078,},Annotations:map[string]string{io.kubernetes.container.hash: 1a2991e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981d2d6d63e8de7ba29edb2287de8176003035a044b6405a584c6b96084a41a0,PodSandboxId:870b9c8e48c332386ae5c1672618e11f719ffa35209e4b70222f58b9a34c9deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713787396342621524,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb27f7952a7420d51aa5450257e
91e7,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3b6146205338e6a1e1d50c3496b68d7eba4009f113c0391f1e3aba0855932b0,PodSandboxId:45562200369ac50ce5c6b2e9cacc0d3881fa654c548a63e20616946b204e4f5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713787396329504198,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a779bf320e79fd7a238207a9693e7ba,},Annotations:map[string]string{io.kubernet
es.container.hash: 5d9fc288,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd5c3990237627a631e36507aee77d29ea52967df0b6e46e51b0e06cd73a8a2c,PodSandboxId:5cd0ea90f87a902e27e894187b2d4f378a295fffe68a08eff1856b239e136445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713787396308812291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba4d5352a7dd71669287409cca46b471,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf90c223253c36939ade0b1ead99909a2fa6f369ccb5853e200ec72b758cb4b0,PodSandboxId:870b9c8e48c332386ae5c1672618e11f719ffa35209e4b70222f58b9a34c9deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713787391959825155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb27f7952a7420d51aa5450257e91e7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bed2d08de3cfa628fa16aea071722f4bb8fcc3457bcf689579e8ac0a6954ed8,PodSandboxId:45562200369ac50ce5c6b2e9cacc0d3881fa654c548a63e20616946b204e4f5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713787391866967994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a779bf320e79fd7a238207a9693e7ba,},Annotations:map[string]string{io.kubernetes.container.hash: 5d9fc288,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0813fabc9fa6ba17b6726b6d8301a1540805164f9f3a949ad51582a197097d2b,PodSandboxId:5cd0ea90f87a902e27e894187b2d4f378a295fffe68a08eff1856b239e136445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713787391836025181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba4d5352a7dd71669287409cca46b471,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1145b77344e2c000127bdd6c9597830f1d5a68c2e0f5c258f7b7976c489984,PodSandboxId:4560018c43edb5052a5cb74cd84ec02052ecace3b29f58d5387c92c69c427e8a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713787300781456327,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d6aa75-1782-4786-89e8-922591a81986,},Annotations:map[string]string{io.kubernetes.container.hash: d69e02e5,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f018015c8868103b2edcf518d46e9d2d743f80366180fd28df34e13a3f133e66,PodSandboxId:0d4df0d32e1c8a6e049ecbf06bedcc9be3aefee9a99a2ce06ebffbfecc637632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713787300567216896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,},Annotations:map[string]string{io.kubernetes.container.hash: 291d2b05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c85b0b4fb2ae515dd9ca5c77f4209d6cb8984b16b4c7abe738e8a5136d778ab9,PodSandboxId:515fd181b7d351a4c5a648d779142c910ebf0f7ba310412789e876c5062e14e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713787277096422946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e5ad272a1493e5b272b1abb8c5c83078,},Annotations:map[string]string{io.kubernetes.container.hash: 1a2991e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7607ab43-4d94-427a-9f2e-b7178d0fc9c4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.783879973Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e170c1e-4d94-413e-a14a-3a637ffadde3 name=/runtime.v1.RuntimeService/Version
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.784012308Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e170c1e-4d94-413e-a14a-3a637ffadde3 name=/runtime.v1.RuntimeService/Version
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.786334188Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=640bbef6-2622-4b10-afb8-d52c60a50edb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.786873192Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713787415786846683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=640bbef6-2622-4b10-afb8-d52c60a50edb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.787573970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa8ee3d4-2381-4e75-96fb-49208ff9e4bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.787637906Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa8ee3d4-2381-4e75-96fb-49208ff9e4bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.788003316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:523eda3c2ab8f80eff05dd7d1e88881e2a78abe174394518a282c0dd4a1e820d,PodSandboxId:49678c14a817da2971dad324e45aa10c38e46bbed10391da1639e621e48abad2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713787401752579561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,},Annotations:map[string]string{io.kubernetes.container.hash: 291d2b05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc8cafc02966b54e54268a2d4c374f58c975dadb9aed0080ac448d01833570a,PodSandboxId:dfbc0ab2d6067610a97488007717cf658c6ce0ad89b8d174639b6f5b3b891ef9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713787401403510512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 24d6aa75-1782-4786-89e8-922591a81986,},Annotations:map[string]string{io.kubernetes.container.hash: d69e02e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e899a93d79f4f434c3090ae4f521cf4a77cf11d0d86c0fc39f8a3a859de477b,PodSandboxId:a273e2e2eeee48c1a08864beb2b6c9021fd0eb8c9da5226b4d20909fe4a57910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713787396647008145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5ad272a14
93e5b272b1abb8c5c83078,},Annotations:map[string]string{io.kubernetes.container.hash: 1a2991e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981d2d6d63e8de7ba29edb2287de8176003035a044b6405a584c6b96084a41a0,PodSandboxId:870b9c8e48c332386ae5c1672618e11f719ffa35209e4b70222f58b9a34c9deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713787396342621524,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb27f7952a7420d51aa5450257e
91e7,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3b6146205338e6a1e1d50c3496b68d7eba4009f113c0391f1e3aba0855932b0,PodSandboxId:45562200369ac50ce5c6b2e9cacc0d3881fa654c548a63e20616946b204e4f5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713787396329504198,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a779bf320e79fd7a238207a9693e7ba,},Annotations:map[string]string{io.kubernet
es.container.hash: 5d9fc288,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd5c3990237627a631e36507aee77d29ea52967df0b6e46e51b0e06cd73a8a2c,PodSandboxId:5cd0ea90f87a902e27e894187b2d4f378a295fffe68a08eff1856b239e136445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713787396308812291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba4d5352a7dd71669287409cca46b471,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf90c223253c36939ade0b1ead99909a2fa6f369ccb5853e200ec72b758cb4b0,PodSandboxId:870b9c8e48c332386ae5c1672618e11f719ffa35209e4b70222f58b9a34c9deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713787391959825155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb27f7952a7420d51aa5450257e91e7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bed2d08de3cfa628fa16aea071722f4bb8fcc3457bcf689579e8ac0a6954ed8,PodSandboxId:45562200369ac50ce5c6b2e9cacc0d3881fa654c548a63e20616946b204e4f5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713787391866967994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a779bf320e79fd7a238207a9693e7ba,},Annotations:map[string]string{io.kubernetes.container.hash: 5d9fc288,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0813fabc9fa6ba17b6726b6d8301a1540805164f9f3a949ad51582a197097d2b,PodSandboxId:5cd0ea90f87a902e27e894187b2d4f378a295fffe68a08eff1856b239e136445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713787391836025181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba4d5352a7dd71669287409cca46b471,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1145b77344e2c000127bdd6c9597830f1d5a68c2e0f5c258f7b7976c489984,PodSandboxId:4560018c43edb5052a5cb74cd84ec02052ecace3b29f58d5387c92c69c427e8a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713787300781456327,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d6aa75-1782-4786-89e8-922591a81986,},Annotations:map[string]string{io.kubernetes.container.hash: d69e02e5,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f018015c8868103b2edcf518d46e9d2d743f80366180fd28df34e13a3f133e66,PodSandboxId:0d4df0d32e1c8a6e049ecbf06bedcc9be3aefee9a99a2ce06ebffbfecc637632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713787300567216896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,},Annotations:map[string]string{io.kubernetes.container.hash: 291d2b05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c85b0b4fb2ae515dd9ca5c77f4209d6cb8984b16b4c7abe738e8a5136d778ab9,PodSandboxId:515fd181b7d351a4c5a648d779142c910ebf0f7ba310412789e876c5062e14e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713787277096422946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e5ad272a1493e5b272b1abb8c5c83078,},Annotations:map[string]string{io.kubernetes.container.hash: 1a2991e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa8ee3d4-2381-4e75-96fb-49208ff9e4bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.838312034Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=171b81e4-b5f4-44d2-ad17-77d80608da0d name=/runtime.v1.RuntimeService/Version
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.838426046Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=171b81e4-b5f4-44d2-ad17-77d80608da0d name=/runtime.v1.RuntimeService/Version
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.841331415Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3b96b4fa-4cb7-43d0-ac3a-6c670c8a29e5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.841882913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713787415841854610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b96b4fa-4cb7-43d0-ac3a-6c670c8a29e5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.844872468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e82a2ca9-a3cb-452f-a258-98ae3020f884 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.844957791Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e82a2ca9-a3cb-452f-a258-98ae3020f884 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:35 pause-253908 crio[2373]: time="2024-04-22 12:03:35.845287734Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:523eda3c2ab8f80eff05dd7d1e88881e2a78abe174394518a282c0dd4a1e820d,PodSandboxId:49678c14a817da2971dad324e45aa10c38e46bbed10391da1639e621e48abad2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713787401752579561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,},Annotations:map[string]string{io.kubernetes.container.hash: 291d2b05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc8cafc02966b54e54268a2d4c374f58c975dadb9aed0080ac448d01833570a,PodSandboxId:dfbc0ab2d6067610a97488007717cf658c6ce0ad89b8d174639b6f5b3b891ef9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713787401403510512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 24d6aa75-1782-4786-89e8-922591a81986,},Annotations:map[string]string{io.kubernetes.container.hash: d69e02e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e899a93d79f4f434c3090ae4f521cf4a77cf11d0d86c0fc39f8a3a859de477b,PodSandboxId:a273e2e2eeee48c1a08864beb2b6c9021fd0eb8c9da5226b4d20909fe4a57910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713787396647008145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5ad272a14
93e5b272b1abb8c5c83078,},Annotations:map[string]string{io.kubernetes.container.hash: 1a2991e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981d2d6d63e8de7ba29edb2287de8176003035a044b6405a584c6b96084a41a0,PodSandboxId:870b9c8e48c332386ae5c1672618e11f719ffa35209e4b70222f58b9a34c9deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713787396342621524,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb27f7952a7420d51aa5450257e
91e7,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3b6146205338e6a1e1d50c3496b68d7eba4009f113c0391f1e3aba0855932b0,PodSandboxId:45562200369ac50ce5c6b2e9cacc0d3881fa654c548a63e20616946b204e4f5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713787396329504198,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a779bf320e79fd7a238207a9693e7ba,},Annotations:map[string]string{io.kubernet
es.container.hash: 5d9fc288,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd5c3990237627a631e36507aee77d29ea52967df0b6e46e51b0e06cd73a8a2c,PodSandboxId:5cd0ea90f87a902e27e894187b2d4f378a295fffe68a08eff1856b239e136445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713787396308812291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba4d5352a7dd71669287409cca46b471,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf90c223253c36939ade0b1ead99909a2fa6f369ccb5853e200ec72b758cb4b0,PodSandboxId:870b9c8e48c332386ae5c1672618e11f719ffa35209e4b70222f58b9a34c9deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713787391959825155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb27f7952a7420d51aa5450257e91e7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bed2d08de3cfa628fa16aea071722f4bb8fcc3457bcf689579e8ac0a6954ed8,PodSandboxId:45562200369ac50ce5c6b2e9cacc0d3881fa654c548a63e20616946b204e4f5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713787391866967994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a779bf320e79fd7a238207a9693e7ba,},Annotations:map[string]string{io.kubernetes.container.hash: 5d9fc288,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0813fabc9fa6ba17b6726b6d8301a1540805164f9f3a949ad51582a197097d2b,PodSandboxId:5cd0ea90f87a902e27e894187b2d4f378a295fffe68a08eff1856b239e136445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713787391836025181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba4d5352a7dd71669287409cca46b471,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1145b77344e2c000127bdd6c9597830f1d5a68c2e0f5c258f7b7976c489984,PodSandboxId:4560018c43edb5052a5cb74cd84ec02052ecace3b29f58d5387c92c69c427e8a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713787300781456327,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d6aa75-1782-4786-89e8-922591a81986,},Annotations:map[string]string{io.kubernetes.container.hash: d69e02e5,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f018015c8868103b2edcf518d46e9d2d743f80366180fd28df34e13a3f133e66,PodSandboxId:0d4df0d32e1c8a6e049ecbf06bedcc9be3aefee9a99a2ce06ebffbfecc637632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713787300567216896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,},Annotations:map[string]string{io.kubernetes.container.hash: 291d2b05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c85b0b4fb2ae515dd9ca5c77f4209d6cb8984b16b4c7abe738e8a5136d778ab9,PodSandboxId:515fd181b7d351a4c5a648d779142c910ebf0f7ba310412789e876c5062e14e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713787277096422946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e5ad272a1493e5b272b1abb8c5c83078,},Annotations:map[string]string{io.kubernetes.container.hash: 1a2991e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e82a2ca9-a3cb-452f-a258-98ae3020f884 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	523eda3c2ab8f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 seconds ago       Running             coredns                   1                   49678c14a817d       coredns-7db6d8ff4d-fzqm5
	2fc8cafc02966       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   14 seconds ago       Running             kube-proxy                1                   dfbc0ab2d6067       kube-proxy-g6k67
	1e899a93d79f4       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   19 seconds ago       Running             kube-apiserver            1                   a273e2e2eeee4       kube-apiserver-pause-253908
	981d2d6d63e8d       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   19 seconds ago       Running             kube-scheduler            2                   870b9c8e48c33       kube-scheduler-pause-253908
	c3b6146205338       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   19 seconds ago       Running             etcd                      2                   45562200369ac       etcd-pause-253908
	cd5c399023762       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   19 seconds ago       Running             kube-controller-manager   2                   5cd0ea90f87a9       kube-controller-manager-pause-253908
	cf90c223253c3       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   23 seconds ago       Exited              kube-scheduler            1                   870b9c8e48c33       kube-scheduler-pause-253908
	3bed2d08de3cf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago       Exited              etcd                      1                   45562200369ac       etcd-pause-253908
	0813fabc9fa6b       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   24 seconds ago       Exited              kube-controller-manager   1                   5cd0ea90f87a9       kube-controller-manager-pause-253908
	eb1145b77344e       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   About a minute ago   Exited              kube-proxy                0                   4560018c43edb       kube-proxy-g6k67
	f018015c88681       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   0d4df0d32e1c8       coredns-7db6d8ff4d-fzqm5
	c85b0b4fb2ae5       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   2 minutes ago        Exited              kube-apiserver            0                   515fd181b7d35       kube-apiserver-pause-253908
	
	
	==> coredns [523eda3c2ab8f80eff05dd7d1e88881e2a78abe174394518a282c0dd4a1e820d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51788 - 61020 "HINFO IN 8314347899284632768.7451899747668725525. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014768681s
	
	
	==> coredns [f018015c8868103b2edcf518d46e9d2d743f80366180fd28df34e13a3f133e66] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45907 - 65095 "HINFO IN 1888660009761205459.1580346811320470488. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017106059s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[658685555]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 12:01:40.948) (total time: 30001ms):
	Trace[658685555]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:02:10.948)
	Trace[658685555]: [30.001599167s] [30.001599167s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1260799927]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 12:01:40.949) (total time: 30000ms):
	Trace[1260799927]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:02:10.950)
	Trace[1260799927]: [30.000723878s] [30.000723878s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2139704370]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 12:01:40.948) (total time: 30002ms):
	Trace[2139704370]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (12:02:10.950)
	Trace[2139704370]: [30.00248972s] [30.00248972s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-253908
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-253908
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=pause-253908
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T12_01_23_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 12:01:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-253908
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 12:03:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 12:03:20 +0000   Mon, 22 Apr 2024 12:01:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 12:03:20 +0000   Mon, 22 Apr 2024 12:01:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 12:03:20 +0000   Mon, 22 Apr 2024 12:01:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 12:03:20 +0000   Mon, 22 Apr 2024 12:01:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.32
	  Hostname:    pause-253908
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 99d1c1cbc20b4119ab7bbfbfdcdf3e8f
	  System UUID:                99d1c1cb-c20b-4119-ab7b-bfbfdcdf3e8f
	  Boot ID:                    1f247bb8-64c6-4486-8498-04e55a6609e9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-fzqm5                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     118s
	  kube-system                 etcd-pause-253908                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m13s
	  kube-system                 kube-apiserver-pause-253908             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-controller-manager-pause-253908    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-proxy-g6k67                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-scheduler-pause-253908             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 115s                   kube-proxy       
	  Normal  Starting                 14s                    kube-proxy       
	  Normal  Starting                 2m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m20s (x8 over 2m20s)  kubelet          Node pause-253908 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m20s (x8 over 2m20s)  kubelet          Node pause-253908 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m20s (x7 over 2m20s)  kubelet          Node pause-253908 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m14s                  kubelet          Node pause-253908 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m14s                  kubelet          Node pause-253908 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m14s                  kubelet          Node pause-253908 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m12s                  kubelet          Node pause-253908 status is now: NodeReady
	  Normal  RegisteredNode           2m                     node-controller  Node pause-253908 event: Registered Node pause-253908 in Controller
	  Normal  Starting                 21s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)      kubelet          Node pause-253908 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)      kubelet          Node pause-253908 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)      kubelet          Node pause-253908 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                     node-controller  Node pause-253908 event: Registered Node pause-253908 in Controller
	
	
	==> dmesg <==
	[  +0.063896] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076267] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.180697] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.157406] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.355194] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +5.112672] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.073888] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.963292] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.066265] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.532768] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.109624] kauditd_printk_skb: 69 callbacks suppressed
	[ +15.842737] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +0.130243] kauditd_printk_skb: 21 callbacks suppressed
	[Apr22 12:02] kauditd_printk_skb: 67 callbacks suppressed
	[Apr22 12:03] systemd-fstab-generator[2171]: Ignoring "noauto" option for root device
	[  +0.166234] systemd-fstab-generator[2183]: Ignoring "noauto" option for root device
	[  +0.211358] systemd-fstab-generator[2197]: Ignoring "noauto" option for root device
	[  +0.166855] systemd-fstab-generator[2211]: Ignoring "noauto" option for root device
	[  +0.393129] systemd-fstab-generator[2238]: Ignoring "noauto" option for root device
	[  +1.845556] systemd-fstab-generator[2712]: Ignoring "noauto" option for root device
	[  +3.314026] systemd-fstab-generator[2896]: Ignoring "noauto" option for root device
	[  +0.074157] kauditd_printk_skb: 170 callbacks suppressed
	[  +5.638034] kauditd_printk_skb: 46 callbacks suppressed
	[ +10.745933] systemd-fstab-generator[3525]: Ignoring "noauto" option for root device
	[  +0.113342] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [3bed2d08de3cfa628fa16aea071722f4bb8fcc3457bcf689579e8ac0a6954ed8] <==
	{"level":"info","ts":"2024-04-22T12:03:12.469965Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fbd4dd8524dacdec","initial-advertise-peer-urls":["https://192.168.50.32:2380"],"listen-peer-urls":["https://192.168.50.32:2380"],"advertise-client-urls":["https://192.168.50.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T12:03:13.643231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-22T12:03:13.64329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-22T12:03:13.643331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgPreVoteResp from fbd4dd8524dacdec at term 2"}
	{"level":"info","ts":"2024-04-22T12:03:13.643344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became candidate at term 3"}
	{"level":"info","ts":"2024-04-22T12:03:13.643349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgVoteResp from fbd4dd8524dacdec at term 3"}
	{"level":"info","ts":"2024-04-22T12:03:13.643358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became leader at term 3"}
	{"level":"info","ts":"2024-04-22T12:03:13.643366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbd4dd8524dacdec elected leader fbd4dd8524dacdec at term 3"}
	{"level":"info","ts":"2024-04-22T12:03:13.650616Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fbd4dd8524dacdec","local-member-attributes":"{Name:pause-253908 ClientURLs:[https://192.168.50.32:2379]}","request-path":"/0/members/fbd4dd8524dacdec/attributes","cluster-id":"2484c988a436b7d1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T12:03:13.650665Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T12:03:13.651251Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T12:03:13.652951Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.32:2379"}
	{"level":"info","ts":"2024-04-22T12:03:13.65461Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T12:03:13.65497Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T12:03:13.655017Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T12:03:14.159238Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-22T12:03:14.159354Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-253908","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.32:2380"],"advertise-client-urls":["https://192.168.50.32:2379"]}
	{"level":"warn","ts":"2024-04-22T12:03:14.159585Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.32:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T12:03:14.159608Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.32:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T12:03:14.159741Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T12:03:14.159827Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-22T12:03:14.161232Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"fbd4dd8524dacdec","current-leader-member-id":"fbd4dd8524dacdec"}
	{"level":"info","ts":"2024-04-22T12:03:14.167548Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-04-22T12:03:14.167917Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-04-22T12:03:14.16793Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-253908","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.32:2380"],"advertise-client-urls":["https://192.168.50.32:2379"]}
	
	
	==> etcd [c3b6146205338e6a1e1d50c3496b68d7eba4009f113c0391f1e3aba0855932b0] <==
	{"level":"info","ts":"2024-04-22T12:03:16.956245Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T12:03:16.956355Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T12:03:16.95673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec switched to configuration voters=(18146372362501279212)"}
	{"level":"info","ts":"2024-04-22T12:03:16.961323Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2484c988a436b7d1","local-member-id":"fbd4dd8524dacdec","added-peer-id":"fbd4dd8524dacdec","added-peer-peer-urls":["https://192.168.50.32:2380"]}
	{"level":"info","ts":"2024-04-22T12:03:16.961625Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2484c988a436b7d1","local-member-id":"fbd4dd8524dacdec","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T12:03:16.961628Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-22T12:03:16.963855Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fbd4dd8524dacdec","initial-advertise-peer-urls":["https://192.168.50.32:2380"],"listen-peer-urls":["https://192.168.50.32:2380"],"advertise-client-urls":["https://192.168.50.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T12:03:16.963903Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-22T12:03:16.961695Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-04-22T12:03:16.963964Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-04-22T12:03:16.971316Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T12:03:18.592259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec is starting a new election at term 3"}
	{"level":"info","ts":"2024-04-22T12:03:18.592386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became pre-candidate at term 3"}
	{"level":"info","ts":"2024-04-22T12:03:18.592461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgPreVoteResp from fbd4dd8524dacdec at term 3"}
	{"level":"info","ts":"2024-04-22T12:03:18.592493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became candidate at term 4"}
	{"level":"info","ts":"2024-04-22T12:03:18.592517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgVoteResp from fbd4dd8524dacdec at term 4"}
	{"level":"info","ts":"2024-04-22T12:03:18.592543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became leader at term 4"}
	{"level":"info","ts":"2024-04-22T12:03:18.592568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbd4dd8524dacdec elected leader fbd4dd8524dacdec at term 4"}
	{"level":"info","ts":"2024-04-22T12:03:18.598037Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fbd4dd8524dacdec","local-member-attributes":"{Name:pause-253908 ClientURLs:[https://192.168.50.32:2379]}","request-path":"/0/members/fbd4dd8524dacdec/attributes","cluster-id":"2484c988a436b7d1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T12:03:18.598269Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T12:03:18.598405Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T12:03:18.598985Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T12:03:18.599048Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T12:03:18.601687Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T12:03:18.601785Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.32:2379"}
	
	
	==> kernel <==
	 12:03:36 up 2 min,  0 users,  load average: 0.50, 0.30, 0.12
	Linux pause-253908 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1e899a93d79f4f434c3090ae4f521cf4a77cf11d0d86c0fc39f8a3a859de477b] <==
	I0422 12:03:20.221299       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0422 12:03:20.221724       1 aggregator.go:165] initial CRD sync complete...
	I0422 12:03:20.221787       1 autoregister_controller.go:141] Starting autoregister controller
	I0422 12:03:20.221795       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0422 12:03:20.258238       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0422 12:03:20.270108       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0422 12:03:20.270711       1 shared_informer.go:320] Caches are synced for configmaps
	I0422 12:03:20.270729       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0422 12:03:20.270744       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0422 12:03:20.274946       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0422 12:03:20.270756       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E0422 12:03:20.300028       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0422 12:03:20.323210       1 cache.go:39] Caches are synced for autoregister controller
	I0422 12:03:20.333386       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0422 12:03:20.340080       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 12:03:20.340118       1 policy_source.go:224] refreshing policies
	I0422 12:03:20.421412       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0422 12:03:21.163120       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0422 12:03:22.261963       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0422 12:03:22.285101       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0422 12:03:22.354835       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0422 12:03:22.399517       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0422 12:03:22.410317       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0422 12:03:32.991129       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0422 12:03:33.138423       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [c85b0b4fb2ae515dd9ca5c77f4209d6cb8984b16b4c7abe738e8a5136d778ab9] <==
	I0422 12:01:38.250658       1 trace.go:236] Trace[1869007728]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:b6e35568-8639-4b54-8c20-2bab9f1392f5,client:192.168.50.32,api-group:,api-version:v1,name:bootstrap-signer,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/bootstrap-signer/token,user-agent:kube-controller-manager/v1.30.0 (linux/amd64) kubernetes/7c48c2b/kube-controller-manager,verb:POST (22-Apr-2024 12:01:36.855) (total time: 1395ms):
	Trace[1869007728]: ---"Write to database call succeeded" len:81 1395ms (12:01:38.250)
	Trace[1869007728]: [1.395266985s] [1.395266985s] END
	I0422 12:01:38.262687       1 trace.go:236] Trace[616800339]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:6eaf7e37-82a4-4c6a-9c7c-d0130551de8a,client:192.168.50.32,api-group:apps,api-version:v1,name:coredns,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/kube-system/deployments/coredns/status,user-agent:kube-controller-manager/v1.30.0 (linux/amd64) kubernetes/7c48c2b/system:serviceaccount:kube-system:deployment-controller,verb:PUT (22-Apr-2024 12:01:37.702) (total time: 560ms):
	Trace[616800339]: ["GuaranteedUpdate etcd3" audit-id:6eaf7e37-82a4-4c6a-9c7c-d0130551de8a,key:/deployments/kube-system/coredns,type:*apps.Deployment,resource:deployments.apps 559ms (12:01:37.702)
	Trace[616800339]:  ---"Txn call completed" 545ms (12:01:38.250)]
	Trace[616800339]: [560.330541ms] [560.330541ms] END
	I0422 12:01:38.267008       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0422 12:03:03.243635       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0422 12:03:03.244015       1 logging.go:59] [core] [Channel #12 SubChannel #14] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.244066       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.244100       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.257931       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.257994       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.258046       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.258112       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.258401       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.258451       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.258513       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.258897       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.258946       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.258997       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.259041       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.259079       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.259124       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0813fabc9fa6ba17b6726b6d8301a1540805164f9f3a949ad51582a197097d2b] <==
	I0422 12:03:12.798318       1 serving.go:380] Generated self-signed cert in-memory
	I0422 12:03:13.378507       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0422 12:03:13.378574       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 12:03:13.380104       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0422 12:03:13.380343       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0422 12:03:13.380683       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0422 12:03:13.381275       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [cd5c3990237627a631e36507aee77d29ea52967df0b6e46e51b0e06cd73a8a2c] <==
	I0422 12:03:32.979528       1 shared_informer.go:320] Caches are synced for namespace
	I0422 12:03:32.981845       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0422 12:03:32.987347       1 shared_informer.go:320] Caches are synced for GC
	I0422 12:03:32.990247       1 shared_informer.go:320] Caches are synced for PVC protection
	I0422 12:03:32.991949       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0422 12:03:32.992089       1 shared_informer.go:320] Caches are synced for job
	I0422 12:03:32.994452       1 shared_informer.go:320] Caches are synced for attach detach
	I0422 12:03:32.994665       1 shared_informer.go:320] Caches are synced for ephemeral
	I0422 12:03:32.997934       1 shared_informer.go:320] Caches are synced for PV protection
	I0422 12:03:33.005534       1 shared_informer.go:320] Caches are synced for deployment
	I0422 12:03:33.006114       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0422 12:03:33.010294       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0422 12:03:33.014575       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0422 12:03:33.015000       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.007µs"
	I0422 12:03:33.015249       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0422 12:03:33.040727       1 shared_informer.go:320] Caches are synced for daemon sets
	I0422 12:03:33.049264       1 shared_informer.go:320] Caches are synced for stateful set
	I0422 12:03:33.063750       1 shared_informer.go:320] Caches are synced for HPA
	I0422 12:03:33.128836       1 shared_informer.go:320] Caches are synced for endpoint
	I0422 12:03:33.200399       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0422 12:03:33.209367       1 shared_informer.go:320] Caches are synced for resource quota
	I0422 12:03:33.235477       1 shared_informer.go:320] Caches are synced for resource quota
	I0422 12:03:33.637423       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 12:03:33.637583       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0422 12:03:33.644733       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [2fc8cafc02966b54e54268a2d4c374f58c975dadb9aed0080ac448d01833570a] <==
	I0422 12:03:21.669764       1 server_linux.go:69] "Using iptables proxy"
	I0422 12:03:21.696879       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.32"]
	I0422 12:03:21.803612       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 12:03:21.803740       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 12:03:21.803831       1 server_linux.go:165] "Using iptables Proxier"
	I0422 12:03:21.820585       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 12:03:21.820806       1 server.go:872] "Version info" version="v1.30.0"
	I0422 12:03:21.820823       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 12:03:21.824118       1 config.go:192] "Starting service config controller"
	I0422 12:03:21.824202       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 12:03:21.824257       1 config.go:101] "Starting endpoint slice config controller"
	I0422 12:03:21.824261       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 12:03:21.824697       1 config.go:319] "Starting node config controller"
	I0422 12:03:21.824703       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 12:03:21.925447       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 12:03:21.925617       1 shared_informer.go:320] Caches are synced for service config
	I0422 12:03:21.929300       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [eb1145b77344e2c000127bdd6c9597830f1d5a68c2e0f5c258f7b7976c489984] <==
	I0422 12:01:40.967087       1 server_linux.go:69] "Using iptables proxy"
	I0422 12:01:40.977042       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.32"]
	I0422 12:01:41.053934       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 12:01:41.054316       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 12:01:41.054516       1 server_linux.go:165] "Using iptables Proxier"
	I0422 12:01:41.071382       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 12:01:41.072376       1 server.go:872] "Version info" version="v1.30.0"
	I0422 12:01:41.072574       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 12:01:41.076428       1 config.go:101] "Starting endpoint slice config controller"
	I0422 12:01:41.076523       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 12:01:41.076643       1 config.go:192] "Starting service config controller"
	I0422 12:01:41.076718       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 12:01:41.093826       1 config.go:319] "Starting node config controller"
	I0422 12:01:41.093948       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 12:01:41.176858       1 shared_informer.go:320] Caches are synced for service config
	I0422 12:01:41.177101       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 12:01:41.194013       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [981d2d6d63e8de7ba29edb2287de8176003035a044b6405a584c6b96084a41a0] <==
	I0422 12:03:17.686013       1 serving.go:380] Generated self-signed cert in-memory
	W0422 12:03:20.225010       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0422 12:03:20.225092       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 12:03:20.225197       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0422 12:03:20.225229       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0422 12:03:20.264673       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0422 12:03:20.264764       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 12:03:20.266775       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0422 12:03:20.266979       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0422 12:03:20.267024       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 12:03:20.267071       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 12:03:20.367903       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cf90c223253c36939ade0b1ead99909a2fa6f369ccb5853e200ec72b758cb4b0] <==
	I0422 12:03:13.427586       1 serving.go:380] Generated self-signed cert in-memory
	W0422 12:03:14.085333       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.168.50.32:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.50.32:8443: connect: connection refused
	W0422 12:03:14.085437       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0422 12:03:14.085463       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0422 12:03:14.090721       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0422 12:03:14.090772       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 12:03:14.094695       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0422 12:03:14.094703       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0422 12:03:14.095026       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0422 12:03:14.095121       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 22 12:03:16 pause-253908 kubelet[2903]: I0422 12:03:16.076508    2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ba4d5352a7dd71669287409cca46b471-flexvolume-dir\") pod \"kube-controller-manager-pause-253908\" (UID: \"ba4d5352a7dd71669287409cca46b471\") " pod="kube-system/kube-controller-manager-pause-253908"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: I0422 12:03:16.076522    2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba4d5352a7dd71669287409cca46b471-k8s-certs\") pod \"kube-controller-manager-pause-253908\" (UID: \"ba4d5352a7dd71669287409cca46b471\") " pod="kube-system/kube-controller-manager-pause-253908"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: I0422 12:03:16.076549    2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3fb27f7952a7420d51aa5450257e91e7-kubeconfig\") pod \"kube-scheduler-pause-253908\" (UID: \"3fb27f7952a7420d51aa5450257e91e7\") " pod="kube-system/kube-scheduler-pause-253908"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: I0422 12:03:16.178368    2903 kubelet_node_status.go:73] "Attempting to register node" node="pause-253908"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: E0422 12:03:16.179429    2903 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.32:8443: connect: connection refused" node="pause-253908"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: I0422 12:03:16.285490    2903 scope.go:117] "RemoveContainer" containerID="0813fabc9fa6ba17b6726b6d8301a1540805164f9f3a949ad51582a197097d2b"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: I0422 12:03:16.287799    2903 scope.go:117] "RemoveContainer" containerID="3bed2d08de3cfa628fa16aea071722f4bb8fcc3457bcf689579e8ac0a6954ed8"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: I0422 12:03:16.289254    2903 scope.go:117] "RemoveContainer" containerID="cf90c223253c36939ade0b1ead99909a2fa6f369ccb5853e200ec72b758cb4b0"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: E0422 12:03:16.473033    2903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-253908?timeout=10s\": dial tcp 192.168.50.32:8443: connect: connection refused" interval="800ms"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: I0422 12:03:16.580800    2903 kubelet_node_status.go:73] "Attempting to register node" node="pause-253908"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: E0422 12:03:16.582078    2903 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.32:8443: connect: connection refused" node="pause-253908"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: W0422 12:03:16.937937    2903 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.32:8443: connect: connection refused
	Apr 22 12:03:16 pause-253908 kubelet[2903]: E0422 12:03:16.938022    2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.32:8443: connect: connection refused
	Apr 22 12:03:17 pause-253908 kubelet[2903]: I0422 12:03:17.384226    2903 kubelet_node_status.go:73] "Attempting to register node" node="pause-253908"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.402490    2903 kubelet_node_status.go:112] "Node was previously registered" node="pause-253908"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.403561    2903 kubelet_node_status.go:76] "Successfully registered node" node="pause-253908"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.407488    2903 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.409066    2903 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.860464    2903 apiserver.go:52] "Watching apiserver"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.864249    2903 topology_manager.go:215] "Topology Admit Handler" podUID="6ea190b9-5bd9-4674-a3b8-bdc5a570f968" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fzqm5"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.864465    2903 topology_manager.go:215] "Topology Admit Handler" podUID="24d6aa75-1782-4786-89e8-922591a81986" podNamespace="kube-system" podName="kube-proxy-g6k67"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.867581    2903 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.910697    2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24d6aa75-1782-4786-89e8-922591a81986-xtables-lock\") pod \"kube-proxy-g6k67\" (UID: \"24d6aa75-1782-4786-89e8-922591a81986\") " pod="kube-system/kube-proxy-g6k67"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.910858    2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24d6aa75-1782-4786-89e8-922591a81986-lib-modules\") pod \"kube-proxy-g6k67\" (UID: \"24d6aa75-1782-4786-89e8-922591a81986\") " pod="kube-system/kube-proxy-g6k67"
	Apr 22 12:03:26 pause-253908 kubelet[2903]: I0422 12:03:26.043343    2903 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-253908 -n pause-253908
helpers_test.go:261: (dbg) Run:  kubectl --context pause-253908 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-253908 -n pause-253908
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-253908 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-253908 logs -n 25: (1.688247922s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-230092 sudo                                | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo                                | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo cat                            | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo cat                            | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo                                | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo                                | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo                                | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo cat                            | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo cat                            | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo                                | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo                                | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo                                | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo find                           | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-230092 sudo crio                           | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-230092                                     | cilium-230092             | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC | 22 Apr 24 12:01 UTC |
	| start   | -p cert-expiration-454029                            | cert-expiration-454029    | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC | 22 Apr 24 12:02 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-262232                          | force-systemd-env-262232  | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC | 22 Apr 24 12:01 UTC |
	| start   | -p force-systemd-flag-905296                         | force-systemd-flag-905296 | jenkins | v1.33.0 | 22 Apr 24 12:01 UTC | 22 Apr 24 12:03 UTC |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-483459                               | NoKubernetes-483459       | jenkins | v1.33.0 | 22 Apr 24 12:02 UTC | 22 Apr 24 12:03 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-253908                                      | pause-253908              | jenkins | v1.33.0 | 22 Apr 24 12:02 UTC | 22 Apr 24 12:03 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-483459                               | NoKubernetes-483459       | jenkins | v1.33.0 | 22 Apr 24 12:03 UTC | 22 Apr 24 12:03 UTC |
	| start   | -p NoKubernetes-483459                               | NoKubernetes-483459       | jenkins | v1.33.0 | 22 Apr 24 12:03 UTC |                     |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-905296 ssh cat                    | force-systemd-flag-905296 | jenkins | v1.33.0 | 22 Apr 24 12:03 UTC | 22 Apr 24 12:03 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-905296                         | force-systemd-flag-905296 | jenkins | v1.33.0 | 22 Apr 24 12:03 UTC | 22 Apr 24 12:03 UTC |
	| start   | -p running-upgrade-307156                            | minikube                  | jenkins | v1.26.0 | 22 Apr 24 12:03 UTC |                     |
	|         | --memory=2200 --vm-driver=kvm2                       |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 12:03:20
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 12:03:20.466357   57341 out.go:296] Setting OutFile to fd 1 ...
	I0422 12:03:20.466545   57341 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0422 12:03:20.466550   57341 out.go:309] Setting ErrFile to fd 2...
	I0422 12:03:20.466556   57341 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0422 12:03:20.467432   57341 root.go:329] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 12:03:20.467983   57341 out.go:303] Setting JSON to false
	I0422 12:03:20.469338   57341 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6344,"bootTime":1713781057,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 12:03:20.469416   57341 start.go:125] virtualization: kvm guest
	I0422 12:03:20.472883   57341 out.go:177] * [running-upgrade-307156] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 12:03:20.474864   57341 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 12:03:20.474903   57341 notify.go:193] Checking for updates...
	I0422 12:03:20.477962   57341 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 12:03:20.479421   57341 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 12:03:20.480754   57341 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 12:03:20.482028   57341 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 12:03:20.483250   57341 out.go:177]   - KUBECONFIG=/tmp/legacy_kubeconfig3970519147
	I0422 12:03:20.486842   57341 config.go:178] Loaded profile config "NoKubernetes-483459": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0422 12:03:20.487336   57341 config.go:178] Loaded profile config "cert-expiration-454029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 12:03:20.487562   57341 config.go:178] Loaded profile config "pause-253908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 12:03:20.487727   57341 driver.go:360] Setting default libvirt URI to qemu:///system
	I0422 12:03:20.527345   57341 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 12:03:20.529030   57341 start.go:284] selected driver: kvm2
	I0422 12:03:20.529042   57341 start.go:805] validating driver "kvm2" against <nil>
	I0422 12:03:20.529065   57341 start.go:816] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 12:03:20.530087   57341 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 12:03:20.530387   57341 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18711-7633/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 12:03:20.546781   57341 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 12:03:20.546870   57341 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0422 12:03:20.547145   57341 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0422 12:03:20.547183   57341 cni.go:95] Creating CNI manager for ""
	I0422 12:03:20.547192   57341 cni.go:165] "kvm2" driver + crio runtime found, recommending bridge
	I0422 12:03:20.547231   57341 start_flags.go:305] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 12:03:20.547240   57341 start_flags.go:310] config:
	{Name:running-upgrade-307156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-307156 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0422 12:03:20.547360   57341 iso.go:128] acquiring lock: {Name:mk0c56e2dcd5f26df497ba87732f90c76c5965a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 12:03:20.550446   57341 out.go:177] * Downloading VM boot image ...
	I0422 12:03:20.551967   57341 download.go:101] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.26.0-amd64.iso
	I0422 12:03:20.659308   57341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/.minikube/last_update_check: {Name:mk9b4930b7746fe1fe003dc6692c783af5901ed9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:03:20.661622   57341 out.go:177] * minikube 1.33.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.33.0
	I0422 12:03:20.663132   57341 out.go:177] * To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	I0422 12:03:17.214249   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | domain NoKubernetes-483459 has defined MAC address 52:54:00:70:98:b9 in network mk-NoKubernetes-483459
	I0422 12:03:17.214656   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | unable to find current IP address of domain NoKubernetes-483459 in network mk-NoKubernetes-483459
	I0422 12:03:17.214679   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | I0422 12:03:17.214615   57074 retry.go:31] will retry after 1.580522427s: waiting for machine to come up
	I0422 12:03:18.796306   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | domain NoKubernetes-483459 has defined MAC address 52:54:00:70:98:b9 in network mk-NoKubernetes-483459
	I0422 12:03:18.796916   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | unable to find current IP address of domain NoKubernetes-483459 in network mk-NoKubernetes-483459
	I0422 12:03:18.796928   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | I0422 12:03:18.796849   57074 retry.go:31] will retry after 1.850880239s: waiting for machine to come up
	I0422 12:03:20.649937   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | domain NoKubernetes-483459 has defined MAC address 52:54:00:70:98:b9 in network mk-NoKubernetes-483459
	I0422 12:03:20.650511   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | unable to find current IP address of domain NoKubernetes-483459 in network mk-NoKubernetes-483459
	I0422 12:03:20.650535   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | I0422 12:03:20.650457   57074 retry.go:31] will retry after 3.329779267s: waiting for machine to come up
	I0422 12:03:17.430723   56551 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0422 12:03:20.205954   56551 api_server.go:279] https://192.168.50.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 12:03:20.205991   56551 api_server.go:103] status: https://192.168.50.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 12:03:20.206005   56551 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0422 12:03:20.226043   56551 api_server.go:279] https://192.168.50.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 12:03:20.226071   56551 api_server.go:103] status: https://192.168.50.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 12:03:20.434505   56551 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0422 12:03:20.441203   56551 api_server.go:279] https://192.168.50.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 12:03:20.441246   56551 api_server.go:103] status: https://192.168.50.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 12:03:20.930543   56551 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0422 12:03:20.943204   56551 api_server.go:279] https://192.168.50.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 12:03:20.943237   56551 api_server.go:103] status: https://192.168.50.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 12:03:21.430566   56551 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0422 12:03:21.442402   56551 api_server.go:279] https://192.168.50.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 12:03:21.442438   56551 api_server.go:103] status: https://192.168.50.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 12:03:21.930899   56551 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0422 12:03:21.941408   56551 api_server.go:279] https://192.168.50.32:8443/healthz returned 200:
	ok
	I0422 12:03:21.960276   56551 api_server.go:141] control plane version: v1.30.0
	I0422 12:03:21.960316   56551 api_server.go:131] duration metric: took 5.029935217s to wait for apiserver health ...
	I0422 12:03:21.960329   56551 cni.go:84] Creating CNI manager for ""
	I0422 12:03:21.960340   56551 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 12:03:21.962657   56551 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 12:03:21.964298   56551 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 12:03:21.982072   56551 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 12:03:22.015149   56551 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 12:03:23.981972   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | domain NoKubernetes-483459 has defined MAC address 52:54:00:70:98:b9 in network mk-NoKubernetes-483459
	I0422 12:03:23.982416   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | unable to find current IP address of domain NoKubernetes-483459 in network mk-NoKubernetes-483459
	I0422 12:03:23.982438   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | I0422 12:03:23.982367   57074 retry.go:31] will retry after 2.834314172s: waiting for machine to come up
	I0422 12:03:22.038773   56551 system_pods.go:59] 6 kube-system pods found
	I0422 12:03:22.038812   56551 system_pods.go:61] "coredns-7db6d8ff4d-fzqm5" [6ea190b9-5bd9-4674-a3b8-bdc5a570f968] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 12:03:22.038822   56551 system_pods.go:61] "etcd-pause-253908" [e91f3f14-3334-4ab6-af81-8c11d6532aba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 12:03:22.038837   56551 system_pods.go:61] "kube-apiserver-pause-253908" [89ac120e-0b2e-44dc-86a1-abd298f25ce4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 12:03:22.038846   56551 system_pods.go:61] "kube-controller-manager-pause-253908" [5ffc7275-ee98-4872-8b9e-5c572501f969] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 12:03:22.038858   56551 system_pods.go:61] "kube-proxy-g6k67" [24d6aa75-1782-4786-89e8-922591a81986] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0422 12:03:22.038866   56551 system_pods.go:61] "kube-scheduler-pause-253908" [8c569828-617e-45d0-86f1-520ccb84fc47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 12:03:22.038875   56551 system_pods.go:74] duration metric: took 23.697976ms to wait for pod list to return data ...
	I0422 12:03:22.038893   56551 node_conditions.go:102] verifying NodePressure condition ...
	I0422 12:03:22.042947   56551 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 12:03:22.042972   56551 node_conditions.go:123] node cpu capacity is 2
	I0422 12:03:22.042983   56551 node_conditions.go:105] duration metric: took 4.085244ms to run NodePressure ...
	I0422 12:03:22.043002   56551 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 12:03:22.422034   56551 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 12:03:22.428252   56551 kubeadm.go:733] kubelet initialised
	I0422 12:03:22.428276   56551 kubeadm.go:734] duration metric: took 6.214177ms waiting for restarted kubelet to initialise ...
	I0422 12:03:22.428287   56551 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 12:03:22.434050   56551 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-fzqm5" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:24.441665   56551 pod_ready.go:102] pod "coredns-7db6d8ff4d-fzqm5" in "kube-system" namespace has status "Ready":"False"
	I0422 12:03:26.442684   56551 pod_ready.go:92] pod "coredns-7db6d8ff4d-fzqm5" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:26.442709   56551 pod_ready.go:81] duration metric: took 4.008631391s for pod "coredns-7db6d8ff4d-fzqm5" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:26.442723   56551 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:29.215500   57341 out.go:177] * Starting control plane node running-upgrade-307156 in cluster running-upgrade-307156
	I0422 12:03:29.217247   57341 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0422 12:03:29.326892   57341 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.1/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0422 12:03:29.326915   57341 cache.go:57] Caching tarball of preloaded images
	I0422 12:03:29.327164   57341 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0422 12:03:29.329206   57341 out.go:177] * Downloading Kubernetes v1.24.1 preload ...
	I0422 12:03:29.330573   57341 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 ...
	I0422 12:03:29.446981   57341 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.1/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:4c8ad2429eafc79a0e5a20bdf41ae0bc -> /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0422 12:03:26.818092   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | domain NoKubernetes-483459 has defined MAC address 52:54:00:70:98:b9 in network mk-NoKubernetes-483459
	I0422 12:03:26.818652   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | unable to find current IP address of domain NoKubernetes-483459 in network mk-NoKubernetes-483459
	I0422 12:03:26.818676   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | I0422 12:03:26.818608   57074 retry.go:31] will retry after 3.603059437s: waiting for machine to come up
	I0422 12:03:30.422901   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | domain NoKubernetes-483459 has defined MAC address 52:54:00:70:98:b9 in network mk-NoKubernetes-483459
	I0422 12:03:30.423437   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | unable to find current IP address of domain NoKubernetes-483459 in network mk-NoKubernetes-483459
	I0422 12:03:30.423460   57034 main.go:141] libmachine: (NoKubernetes-483459) DBG | I0422 12:03:30.423387   57074 retry.go:31] will retry after 6.23834071s: waiting for machine to come up
	I0422 12:03:27.950996   56551 pod_ready.go:92] pod "etcd-pause-253908" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:27.951024   56551 pod_ready.go:81] duration metric: took 1.508291694s for pod "etcd-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:27.951038   56551 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:27.956527   56551 pod_ready.go:92] pod "kube-apiserver-pause-253908" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:27.956550   56551 pod_ready.go:81] duration metric: took 5.502667ms for pod "kube-apiserver-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:27.956561   56551 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:27.963296   56551 pod_ready.go:92] pod "kube-controller-manager-pause-253908" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:27.963323   56551 pod_ready.go:81] duration metric: took 6.752295ms for pod "kube-controller-manager-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:27.963337   56551 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g6k67" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:27.971973   56551 pod_ready.go:92] pod "kube-proxy-g6k67" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:27.971995   56551 pod_ready.go:81] duration metric: took 8.651011ms for pod "kube-proxy-g6k67" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:27.972004   56551 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:29.978838   56551 pod_ready.go:102] pod "kube-scheduler-pause-253908" in "kube-system" namespace has status "Ready":"False"
	I0422 12:03:31.980267   56551 pod_ready.go:92] pod "kube-scheduler-pause-253908" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:31.980289   56551 pod_ready.go:81] duration metric: took 4.008279691s for pod "kube-scheduler-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:31.980297   56551 pod_ready.go:38] duration metric: took 9.551999261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 12:03:31.980320   56551 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 12:03:31.995218   56551 ops.go:34] apiserver oom_adj: -16
	I0422 12:03:31.995240   56551 kubeadm.go:591] duration metric: took 18.568866736s to restartPrimaryControlPlane
	I0422 12:03:31.995251   56551 kubeadm.go:393] duration metric: took 18.744771729s to StartCluster
	I0422 12:03:31.995278   56551 settings.go:142] acquiring lock: {Name:mkd680667f0df4166491741d55b55ac111bb0138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:03:31.995356   56551 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 12:03:31.996637   56551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18711-7633/kubeconfig: {Name:mkee6de4c6906fe5621e8aeac858a93219648db5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 12:03:31.996926   56551 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 12:03:31.998948   56551 out.go:177] * Verifying Kubernetes components...
	I0422 12:03:31.996993   56551 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 12:03:31.997281   56551 config.go:182] Loaded profile config "pause-253908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 12:03:32.000506   56551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 12:03:32.002385   56551 out.go:177] * Enabled addons: 
	I0422 12:03:32.003934   56551 addons.go:505] duration metric: took 6.957838ms for enable addons: enabled=[]
	I0422 12:03:32.179183   56551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 12:03:32.204051   56551 node_ready.go:35] waiting up to 6m0s for node "pause-253908" to be "Ready" ...
	I0422 12:03:32.207198   56551 node_ready.go:49] node "pause-253908" has status "Ready":"True"
	I0422 12:03:32.207217   56551 node_ready.go:38] duration metric: took 3.132304ms for node "pause-253908" to be "Ready" ...
	I0422 12:03:32.207227   56551 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 12:03:32.213177   56551 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fzqm5" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:32.218404   56551 pod_ready.go:92] pod "coredns-7db6d8ff4d-fzqm5" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:32.218423   56551 pod_ready.go:81] duration metric: took 5.22158ms for pod "coredns-7db6d8ff4d-fzqm5" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:32.218431   56551 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:32.440108   56551 pod_ready.go:92] pod "etcd-pause-253908" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:32.440131   56551 pod_ready.go:81] duration metric: took 221.694706ms for pod "etcd-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:32.440141   56551 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:32.839827   56551 pod_ready.go:92] pod "kube-apiserver-pause-253908" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:32.839852   56551 pod_ready.go:81] duration metric: took 399.704925ms for pod "kube-apiserver-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:32.839862   56551 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:33.241979   56551 pod_ready.go:92] pod "kube-controller-manager-pause-253908" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:33.242012   56551 pod_ready.go:81] duration metric: took 402.142448ms for pod "kube-controller-manager-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:33.242027   56551 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g6k67" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:33.641645   56551 pod_ready.go:92] pod "kube-proxy-g6k67" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:33.641680   56551 pod_ready.go:81] duration metric: took 399.644678ms for pod "kube-proxy-g6k67" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:33.641696   56551 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:34.039339   56551 pod_ready.go:92] pod "kube-scheduler-pause-253908" in "kube-system" namespace has status "Ready":"True"
	I0422 12:03:34.039366   56551 pod_ready.go:81] duration metric: took 397.661927ms for pod "kube-scheduler-pause-253908" in "kube-system" namespace to be "Ready" ...
	I0422 12:03:34.039375   56551 pod_ready.go:38] duration metric: took 1.832136963s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 12:03:34.039392   56551 api_server.go:52] waiting for apiserver process to appear ...
	I0422 12:03:34.039455   56551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 12:03:34.059295   56551 api_server.go:72] duration metric: took 2.06233036s to wait for apiserver process to appear ...
	I0422 12:03:34.059323   56551 api_server.go:88] waiting for apiserver healthz status ...
	I0422 12:03:34.059345   56551 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0422 12:03:34.065066   56551 api_server.go:279] https://192.168.50.32:8443/healthz returned 200:
	ok
	I0422 12:03:34.066126   56551 api_server.go:141] control plane version: v1.30.0
	I0422 12:03:34.066146   56551 api_server.go:131] duration metric: took 6.817333ms to wait for apiserver health ...
	I0422 12:03:34.066154   56551 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 12:03:34.242544   56551 system_pods.go:59] 6 kube-system pods found
	I0422 12:03:34.242572   56551 system_pods.go:61] "coredns-7db6d8ff4d-fzqm5" [6ea190b9-5bd9-4674-a3b8-bdc5a570f968] Running
	I0422 12:03:34.242576   56551 system_pods.go:61] "etcd-pause-253908" [e91f3f14-3334-4ab6-af81-8c11d6532aba] Running
	I0422 12:03:34.242585   56551 system_pods.go:61] "kube-apiserver-pause-253908" [89ac120e-0b2e-44dc-86a1-abd298f25ce4] Running
	I0422 12:03:34.242589   56551 system_pods.go:61] "kube-controller-manager-pause-253908" [5ffc7275-ee98-4872-8b9e-5c572501f969] Running
	I0422 12:03:34.242592   56551 system_pods.go:61] "kube-proxy-g6k67" [24d6aa75-1782-4786-89e8-922591a81986] Running
	I0422 12:03:34.242595   56551 system_pods.go:61] "kube-scheduler-pause-253908" [8c569828-617e-45d0-86f1-520ccb84fc47] Running
	I0422 12:03:34.242600   56551 system_pods.go:74] duration metric: took 176.442257ms to wait for pod list to return data ...
	I0422 12:03:34.242608   56551 default_sa.go:34] waiting for default service account to be created ...
	I0422 12:03:34.439760   56551 default_sa.go:45] found service account: "default"
	I0422 12:03:34.439786   56551 default_sa.go:55] duration metric: took 197.172392ms for default service account to be created ...
	I0422 12:03:34.439795   56551 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 12:03:34.642739   56551 system_pods.go:86] 6 kube-system pods found
	I0422 12:03:34.642797   56551 system_pods.go:89] "coredns-7db6d8ff4d-fzqm5" [6ea190b9-5bd9-4674-a3b8-bdc5a570f968] Running
	I0422 12:03:34.642805   56551 system_pods.go:89] "etcd-pause-253908" [e91f3f14-3334-4ab6-af81-8c11d6532aba] Running
	I0422 12:03:34.642813   56551 system_pods.go:89] "kube-apiserver-pause-253908" [89ac120e-0b2e-44dc-86a1-abd298f25ce4] Running
	I0422 12:03:34.642820   56551 system_pods.go:89] "kube-controller-manager-pause-253908" [5ffc7275-ee98-4872-8b9e-5c572501f969] Running
	I0422 12:03:34.642826   56551 system_pods.go:89] "kube-proxy-g6k67" [24d6aa75-1782-4786-89e8-922591a81986] Running
	I0422 12:03:34.642837   56551 system_pods.go:89] "kube-scheduler-pause-253908" [8c569828-617e-45d0-86f1-520ccb84fc47] Running
	I0422 12:03:34.642852   56551 system_pods.go:126] duration metric: took 203.050874ms to wait for k8s-apps to be running ...
	I0422 12:03:34.642861   56551 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 12:03:34.642923   56551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 12:03:34.664107   56551 system_svc.go:56] duration metric: took 21.23486ms WaitForService to wait for kubelet
	I0422 12:03:34.664137   56551 kubeadm.go:576] duration metric: took 2.667178463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 12:03:34.664165   56551 node_conditions.go:102] verifying NodePressure condition ...
	I0422 12:03:34.839687   56551 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 12:03:34.839717   56551 node_conditions.go:123] node cpu capacity is 2
	I0422 12:03:34.839731   56551 node_conditions.go:105] duration metric: took 175.559846ms to run NodePressure ...
	I0422 12:03:34.839746   56551 start.go:240] waiting for startup goroutines ...
	I0422 12:03:34.839755   56551 start.go:245] waiting for cluster config update ...
	I0422 12:03:34.839766   56551 start.go:254] writing updated cluster config ...
	I0422 12:03:34.840139   56551 ssh_runner.go:195] Run: rm -f paused
	I0422 12:03:34.895958   56551 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 12:03:34.898239   56551 out.go:177] * Done! kubectl is now configured to use "pause-253908" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.056106471Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8597d6c7-6084-4e28-8321-2c05b66a797c name=/runtime.v1.RuntimeService/Version
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.057490677Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e661f000-0d46-4969-8faf-065ae9152a59 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.058059106Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713787418057975069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e661f000-0d46-4969-8faf-065ae9152a59 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.058865230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=457ed8d5-5e2a-40ed-934a-82a59513c8ee name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.058949572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=457ed8d5-5e2a-40ed-934a-82a59513c8ee name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.059280038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:523eda3c2ab8f80eff05dd7d1e88881e2a78abe174394518a282c0dd4a1e820d,PodSandboxId:49678c14a817da2971dad324e45aa10c38e46bbed10391da1639e621e48abad2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713787401752579561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,},Annotations:map[string]string{io.kubernetes.container.hash: 291d2b05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc8cafc02966b54e54268a2d4c374f58c975dadb9aed0080ac448d01833570a,PodSandboxId:dfbc0ab2d6067610a97488007717cf658c6ce0ad89b8d174639b6f5b3b891ef9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713787401403510512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 24d6aa75-1782-4786-89e8-922591a81986,},Annotations:map[string]string{io.kubernetes.container.hash: d69e02e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e899a93d79f4f434c3090ae4f521cf4a77cf11d0d86c0fc39f8a3a859de477b,PodSandboxId:a273e2e2eeee48c1a08864beb2b6c9021fd0eb8c9da5226b4d20909fe4a57910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713787396647008145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5ad272a14
93e5b272b1abb8c5c83078,},Annotations:map[string]string{io.kubernetes.container.hash: 1a2991e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981d2d6d63e8de7ba29edb2287de8176003035a044b6405a584c6b96084a41a0,PodSandboxId:870b9c8e48c332386ae5c1672618e11f719ffa35209e4b70222f58b9a34c9deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713787396342621524,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb27f7952a7420d51aa5450257e
91e7,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3b6146205338e6a1e1d50c3496b68d7eba4009f113c0391f1e3aba0855932b0,PodSandboxId:45562200369ac50ce5c6b2e9cacc0d3881fa654c548a63e20616946b204e4f5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713787396329504198,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a779bf320e79fd7a238207a9693e7ba,},Annotations:map[string]string{io.kubernet
es.container.hash: 5d9fc288,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd5c3990237627a631e36507aee77d29ea52967df0b6e46e51b0e06cd73a8a2c,PodSandboxId:5cd0ea90f87a902e27e894187b2d4f378a295fffe68a08eff1856b239e136445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713787396308812291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba4d5352a7dd71669287409cca46b471,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf90c223253c36939ade0b1ead99909a2fa6f369ccb5853e200ec72b758cb4b0,PodSandboxId:870b9c8e48c332386ae5c1672618e11f719ffa35209e4b70222f58b9a34c9deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713787391959825155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb27f7952a7420d51aa5450257e91e7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bed2d08de3cfa628fa16aea071722f4bb8fcc3457bcf689579e8ac0a6954ed8,PodSandboxId:45562200369ac50ce5c6b2e9cacc0d3881fa654c548a63e20616946b204e4f5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713787391866967994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a779bf320e79fd7a238207a9693e7ba,},Annotations:map[string]string{io.kubernetes.container.hash: 5d9fc288,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0813fabc9fa6ba17b6726b6d8301a1540805164f9f3a949ad51582a197097d2b,PodSandboxId:5cd0ea90f87a902e27e894187b2d4f378a295fffe68a08eff1856b239e136445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713787391836025181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba4d5352a7dd71669287409cca46b471,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1145b77344e2c000127bdd6c9597830f1d5a68c2e0f5c258f7b7976c489984,PodSandboxId:4560018c43edb5052a5cb74cd84ec02052ecace3b29f58d5387c92c69c427e8a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713787300781456327,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d6aa75-1782-4786-89e8-922591a81986,},Annotations:map[string]string{io.kubernetes.container.hash: d69e02e5,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f018015c8868103b2edcf518d46e9d2d743f80366180fd28df34e13a3f133e66,PodSandboxId:0d4df0d32e1c8a6e049ecbf06bedcc9be3aefee9a99a2ce06ebffbfecc637632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713787300567216896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,},Annotations:map[string]string{io.kubernetes.container.hash: 291d2b05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c85b0b4fb2ae515dd9ca5c77f4209d6cb8984b16b4c7abe738e8a5136d778ab9,PodSandboxId:515fd181b7d351a4c5a648d779142c910ebf0f7ba310412789e876c5062e14e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713787277096422946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e5ad272a1493e5b272b1abb8c5c83078,},Annotations:map[string]string{io.kubernetes.container.hash: 1a2991e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=457ed8d5-5e2a-40ed-934a-82a59513c8ee name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.114977825Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf88f37d-aa0c-44c9-8e3c-58a44ff87d82 name=/runtime.v1.RuntimeService/Version
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.115117151Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf88f37d-aa0c-44c9-8e3c-58a44ff87d82 name=/runtime.v1.RuntimeService/Version
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.119995863Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e0aeb90-5066-4e7e-affe-7f6e675a6ba1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.120659569Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713787418120623957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e0aeb90-5066-4e7e-affe-7f6e675a6ba1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.121576604Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=292ec655-4394-4f7e-af21-83899bcc862c name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.121651857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=292ec655-4394-4f7e-af21-83899bcc862c name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.121893845Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:523eda3c2ab8f80eff05dd7d1e88881e2a78abe174394518a282c0dd4a1e820d,PodSandboxId:49678c14a817da2971dad324e45aa10c38e46bbed10391da1639e621e48abad2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713787401752579561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,},Annotations:map[string]string{io.kubernetes.container.hash: 291d2b05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc8cafc02966b54e54268a2d4c374f58c975dadb9aed0080ac448d01833570a,PodSandboxId:dfbc0ab2d6067610a97488007717cf658c6ce0ad89b8d174639b6f5b3b891ef9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713787401403510512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 24d6aa75-1782-4786-89e8-922591a81986,},Annotations:map[string]string{io.kubernetes.container.hash: d69e02e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e899a93d79f4f434c3090ae4f521cf4a77cf11d0d86c0fc39f8a3a859de477b,PodSandboxId:a273e2e2eeee48c1a08864beb2b6c9021fd0eb8c9da5226b4d20909fe4a57910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713787396647008145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5ad272a14
93e5b272b1abb8c5c83078,},Annotations:map[string]string{io.kubernetes.container.hash: 1a2991e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981d2d6d63e8de7ba29edb2287de8176003035a044b6405a584c6b96084a41a0,PodSandboxId:870b9c8e48c332386ae5c1672618e11f719ffa35209e4b70222f58b9a34c9deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713787396342621524,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb27f7952a7420d51aa5450257e
91e7,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3b6146205338e6a1e1d50c3496b68d7eba4009f113c0391f1e3aba0855932b0,PodSandboxId:45562200369ac50ce5c6b2e9cacc0d3881fa654c548a63e20616946b204e4f5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713787396329504198,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a779bf320e79fd7a238207a9693e7ba,},Annotations:map[string]string{io.kubernet
es.container.hash: 5d9fc288,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd5c3990237627a631e36507aee77d29ea52967df0b6e46e51b0e06cd73a8a2c,PodSandboxId:5cd0ea90f87a902e27e894187b2d4f378a295fffe68a08eff1856b239e136445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713787396308812291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba4d5352a7dd71669287409cca46b471,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf90c223253c36939ade0b1ead99909a2fa6f369ccb5853e200ec72b758cb4b0,PodSandboxId:870b9c8e48c332386ae5c1672618e11f719ffa35209e4b70222f58b9a34c9deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713787391959825155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb27f7952a7420d51aa5450257e91e7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bed2d08de3cfa628fa16aea071722f4bb8fcc3457bcf689579e8ac0a6954ed8,PodSandboxId:45562200369ac50ce5c6b2e9cacc0d3881fa654c548a63e20616946b204e4f5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713787391866967994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a779bf320e79fd7a238207a9693e7ba,},Annotations:map[string]string{io.kubernetes.container.hash: 5d9fc288,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0813fabc9fa6ba17b6726b6d8301a1540805164f9f3a949ad51582a197097d2b,PodSandboxId:5cd0ea90f87a902e27e894187b2d4f378a295fffe68a08eff1856b239e136445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713787391836025181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba4d5352a7dd71669287409cca46b471,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1145b77344e2c000127bdd6c9597830f1d5a68c2e0f5c258f7b7976c489984,PodSandboxId:4560018c43edb5052a5cb74cd84ec02052ecace3b29f58d5387c92c69c427e8a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713787300781456327,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d6aa75-1782-4786-89e8-922591a81986,},Annotations:map[string]string{io.kubernetes.container.hash: d69e02e5,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f018015c8868103b2edcf518d46e9d2d743f80366180fd28df34e13a3f133e66,PodSandboxId:0d4df0d32e1c8a6e049ecbf06bedcc9be3aefee9a99a2ce06ebffbfecc637632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713787300567216896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,},Annotations:map[string]string{io.kubernetes.container.hash: 291d2b05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c85b0b4fb2ae515dd9ca5c77f4209d6cb8984b16b4c7abe738e8a5136d778ab9,PodSandboxId:515fd181b7d351a4c5a648d779142c910ebf0f7ba310412789e876c5062e14e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713787277096422946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e5ad272a1493e5b272b1abb8c5c83078,},Annotations:map[string]string{io.kubernetes.container.hash: 1a2991e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=292ec655-4394-4f7e-af21-83899bcc862c name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.124467636Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c3f662ef-6bc9-492d-90a7-f6a104c49ba0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.124752049Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:49678c14a817da2971dad324e45aa10c38e46bbed10391da1639e621e48abad2,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-fzqm5,Uid:6ea190b9-5bd9-4674-a3b8-bdc5a570f968,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713787401292051269,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T12:03:20.863853280Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dfbc0ab2d6067610a97488007717cf658c6ce0ad89b8d174639b6f5b3b891ef9,Metadata:&PodSandboxMetadata{Name:kube-proxy-g6k67,Uid:24d6aa75-1782-4786-89e8-922591a81986,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1713787401174588742,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d6aa75-1782-4786-89e8-922591a81986,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T12:03:20.863862759Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a273e2e2eeee48c1a08864beb2b6c9021fd0eb8c9da5226b4d20909fe4a57910,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-253908,Uid:e5ad272a1493e5b272b1abb8c5c83078,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713787396359669511,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5ad272a1493e5b272b1abb8c5c83078,tier: control-plane,},Annotations:map[string
]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.32:8443,kubernetes.io/config.hash: e5ad272a1493e5b272b1abb8c5c83078,kubernetes.io/config.seen: 2024-04-22T12:03:15.854555294Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:45562200369ac50ce5c6b2e9cacc0d3881fa654c548a63e20616946b204e4f5e,Metadata:&PodSandboxMetadata{Name:etcd-pause-253908,Uid:4a779bf320e79fd7a238207a9693e7ba,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713787391587718260,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a779bf320e79fd7a238207a9693e7ba,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.32:2379,kubernetes.io/config.hash: 4a779bf320e79fd7a238207a9693e7ba,kubernetes.io/config.seen: 2024-04-22T12:01:22.871334920Z,kubernetes.io/config.source: file,},RuntimeHan
dler:,},&PodSandbox{Id:870b9c8e48c332386ae5c1672618e11f719ffa35209e4b70222f58b9a34c9deb,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-253908,Uid:3fb27f7952a7420d51aa5450257e91e7,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713787391577005293,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb27f7952a7420d51aa5450257e91e7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3fb27f7952a7420d51aa5450257e91e7,kubernetes.io/config.seen: 2024-04-22T12:01:22.871344579Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5cd0ea90f87a902e27e894187b2d4f378a295fffe68a08eff1856b239e136445,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-253908,Uid:ba4d5352a7dd71669287409cca46b471,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1713787391560760471,Labels:map[string]strin
g{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba4d5352a7dd71669287409cca46b471,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ba4d5352a7dd71669287409cca46b471,kubernetes.io/config.seen: 2024-04-22T12:01:22.871342967Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4560018c43edb5052a5cb74cd84ec02052ecace3b29f58d5387c92c69c427e8a,Metadata:&PodSandboxMetadata{Name:kube-proxy-g6k67,Uid:24d6aa75-1782-4786-89e8-922591a81986,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713787300624284000,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d6aa75-1782-4786-89e8-922591a81986,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]st
ring{kubernetes.io/config.seen: 2024-04-22T12:01:38.509232288Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0d4df0d32e1c8a6e049ecbf06bedcc9be3aefee9a99a2ce06ebffbfecc637632,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-fzqm5,Uid:6ea190b9-5bd9-4674-a3b8-bdc5a570f968,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713787300257267196,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T12:01:38.441439829Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:515fd181b7d351a4c5a648d779142c910ebf0f7ba310412789e876c5062e14e9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-253908,Uid:e5ad272a1493e5b272b1abb8c5c83078,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,Creat
edAt:1713787276807951185,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5ad272a1493e5b272b1abb8c5c83078,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.32:8443,kubernetes.io/config.hash: e5ad272a1493e5b272b1abb8c5c83078,kubernetes.io/config.seen: 2024-04-22T12:01:16.356099514Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c3f662ef-6bc9-492d-90a7-f6a104c49ba0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.126743220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3217d18c-4943-4f88-8d92-ef9e8f710e7a name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.126823382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3217d18c-4943-4f88-8d92-ef9e8f710e7a name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.127312646Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:523eda3c2ab8f80eff05dd7d1e88881e2a78abe174394518a282c0dd4a1e820d,PodSandboxId:49678c14a817da2971dad324e45aa10c38e46bbed10391da1639e621e48abad2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713787401752579561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,},Annotations:map[string]string{io.kubernetes.container.hash: 291d2b05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc8cafc02966b54e54268a2d4c374f58c975dadb9aed0080ac448d01833570a,PodSandboxId:dfbc0ab2d6067610a97488007717cf658c6ce0ad89b8d174639b6f5b3b891ef9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713787401403510512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 24d6aa75-1782-4786-89e8-922591a81986,},Annotations:map[string]string{io.kubernetes.container.hash: d69e02e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e899a93d79f4f434c3090ae4f521cf4a77cf11d0d86c0fc39f8a3a859de477b,PodSandboxId:a273e2e2eeee48c1a08864beb2b6c9021fd0eb8c9da5226b4d20909fe4a57910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713787396647008145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5ad272a14
93e5b272b1abb8c5c83078,},Annotations:map[string]string{io.kubernetes.container.hash: 1a2991e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981d2d6d63e8de7ba29edb2287de8176003035a044b6405a584c6b96084a41a0,PodSandboxId:870b9c8e48c332386ae5c1672618e11f719ffa35209e4b70222f58b9a34c9deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713787396342621524,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb27f7952a7420d51aa5450257e
91e7,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3b6146205338e6a1e1d50c3496b68d7eba4009f113c0391f1e3aba0855932b0,PodSandboxId:45562200369ac50ce5c6b2e9cacc0d3881fa654c548a63e20616946b204e4f5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713787396329504198,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a779bf320e79fd7a238207a9693e7ba,},Annotations:map[string]string{io.kubernet
es.container.hash: 5d9fc288,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd5c3990237627a631e36507aee77d29ea52967df0b6e46e51b0e06cd73a8a2c,PodSandboxId:5cd0ea90f87a902e27e894187b2d4f378a295fffe68a08eff1856b239e136445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713787396308812291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba4d5352a7dd71669287409cca46b471,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf90c223253c36939ade0b1ead99909a2fa6f369ccb5853e200ec72b758cb4b0,PodSandboxId:870b9c8e48c332386ae5c1672618e11f719ffa35209e4b70222f58b9a34c9deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713787391959825155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb27f7952a7420d51aa5450257e91e7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bed2d08de3cfa628fa16aea071722f4bb8fcc3457bcf689579e8ac0a6954ed8,PodSandboxId:45562200369ac50ce5c6b2e9cacc0d3881fa654c548a63e20616946b204e4f5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713787391866967994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a779bf320e79fd7a238207a9693e7ba,},Annotations:map[string]string{io.kubernetes.container.hash: 5d9fc288,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0813fabc9fa6ba17b6726b6d8301a1540805164f9f3a949ad51582a197097d2b,PodSandboxId:5cd0ea90f87a902e27e894187b2d4f378a295fffe68a08eff1856b239e136445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713787391836025181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba4d5352a7dd71669287409cca46b471,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1145b77344e2c000127bdd6c9597830f1d5a68c2e0f5c258f7b7976c489984,PodSandboxId:4560018c43edb5052a5cb74cd84ec02052ecace3b29f58d5387c92c69c427e8a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713787300781456327,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d6aa75-1782-4786-89e8-922591a81986,},Annotations:map[string]string{io.kubernetes.container.hash: d69e02e5,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f018015c8868103b2edcf518d46e9d2d743f80366180fd28df34e13a3f133e66,PodSandboxId:0d4df0d32e1c8a6e049ecbf06bedcc9be3aefee9a99a2ce06ebffbfecc637632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713787300567216896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,},Annotations:map[string]string{io.kubernetes.container.hash: 291d2b05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c85b0b4fb2ae515dd9ca5c77f4209d6cb8984b16b4c7abe738e8a5136d778ab9,PodSandboxId:515fd181b7d351a4c5a648d779142c910ebf0f7ba310412789e876c5062e14e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713787277096422946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e5ad272a1493e5b272b1abb8c5c83078,},Annotations:map[string]string{io.kubernetes.container.hash: 1a2991e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3217d18c-4943-4f88-8d92-ef9e8f710e7a name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.178730892Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1639bbdc-2aaa-4968-ad29-3e22153a6f98 name=/runtime.v1.RuntimeService/Version
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.178877314Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1639bbdc-2aaa-4968-ad29-3e22153a6f98 name=/runtime.v1.RuntimeService/Version
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.180643484Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3836f6c-9af4-45dd-bd58-72fb0019cb20 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.181494667Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713787418181464435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3836f6c-9af4-45dd-bd58-72fb0019cb20 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.182355630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c824f98-3592-4150-a2b5-916a441ca611 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.182507345Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c824f98-3592-4150-a2b5-916a441ca611 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 12:03:38 pause-253908 crio[2373]: time="2024-04-22 12:03:38.183765036Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:523eda3c2ab8f80eff05dd7d1e88881e2a78abe174394518a282c0dd4a1e820d,PodSandboxId:49678c14a817da2971dad324e45aa10c38e46bbed10391da1639e621e48abad2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713787401752579561,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,},Annotations:map[string]string{io.kubernetes.container.hash: 291d2b05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fc8cafc02966b54e54268a2d4c374f58c975dadb9aed0080ac448d01833570a,PodSandboxId:dfbc0ab2d6067610a97488007717cf658c6ce0ad89b8d174639b6f5b3b891ef9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713787401403510512,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 24d6aa75-1782-4786-89e8-922591a81986,},Annotations:map[string]string{io.kubernetes.container.hash: d69e02e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e899a93d79f4f434c3090ae4f521cf4a77cf11d0d86c0fc39f8a3a859de477b,PodSandboxId:a273e2e2eeee48c1a08864beb2b6c9021fd0eb8c9da5226b4d20909fe4a57910,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713787396647008145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5ad272a14
93e5b272b1abb8c5c83078,},Annotations:map[string]string{io.kubernetes.container.hash: 1a2991e5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981d2d6d63e8de7ba29edb2287de8176003035a044b6405a584c6b96084a41a0,PodSandboxId:870b9c8e48c332386ae5c1672618e11f719ffa35209e4b70222f58b9a34c9deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713787396342621524,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb27f7952a7420d51aa5450257e
91e7,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3b6146205338e6a1e1d50c3496b68d7eba4009f113c0391f1e3aba0855932b0,PodSandboxId:45562200369ac50ce5c6b2e9cacc0d3881fa654c548a63e20616946b204e4f5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713787396329504198,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a779bf320e79fd7a238207a9693e7ba,},Annotations:map[string]string{io.kubernet
es.container.hash: 5d9fc288,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd5c3990237627a631e36507aee77d29ea52967df0b6e46e51b0e06cd73a8a2c,PodSandboxId:5cd0ea90f87a902e27e894187b2d4f378a295fffe68a08eff1856b239e136445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713787396308812291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba4d5352a7dd71669287409cca46b471,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf90c223253c36939ade0b1ead99909a2fa6f369ccb5853e200ec72b758cb4b0,PodSandboxId:870b9c8e48c332386ae5c1672618e11f719ffa35209e4b70222f58b9a34c9deb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713787391959825155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fb27f7952a7420d51aa5450257e91e7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bed2d08de3cfa628fa16aea071722f4bb8fcc3457bcf689579e8ac0a6954ed8,PodSandboxId:45562200369ac50ce5c6b2e9cacc0d3881fa654c548a63e20616946b204e4f5e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713787391866967994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a779bf320e79fd7a238207a9693e7ba,},Annotations:map[string]string{io.kubernetes.container.hash: 5d9fc288,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0813fabc9fa6ba17b6726b6d8301a1540805164f9f3a949ad51582a197097d2b,PodSandboxId:5cd0ea90f87a902e27e894187b2d4f378a295fffe68a08eff1856b239e136445,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713787391836025181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba4d5352a7dd71669287409cca46b471,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb1145b77344e2c000127bdd6c9597830f1d5a68c2e0f5c258f7b7976c489984,PodSandboxId:4560018c43edb5052a5cb74cd84ec02052ecace3b29f58d5387c92c69c427e8a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713787300781456327,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g6k67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d6aa75-1782-4786-89e8-922591a81986,},Annotations:map[string]string{io.kubernetes.container.hash: d69e02e5,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f018015c8868103b2edcf518d46e9d2d743f80366180fd28df34e13a3f133e66,PodSandboxId:0d4df0d32e1c8a6e049ecbf06bedcc9be3aefee9a99a2ce06ebffbfecc637632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713787300567216896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzqm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ea190b9-5bd9-4674-a3b8-bdc5a570f968,},Annotations:map[string]string{io.kubernetes.container.hash: 291d2b05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c85b0b4fb2ae515dd9ca5c77f4209d6cb8984b16b4c7abe738e8a5136d778ab9,PodSandboxId:515fd181b7d351a4c5a648d779142c910ebf0f7ba310412789e876c5062e14e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713787277096422946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-253908,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: e5ad272a1493e5b272b1abb8c5c83078,},Annotations:map[string]string{io.kubernetes.container.hash: 1a2991e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c824f98-3592-4150-a2b5-916a441ca611 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	523eda3c2ab8f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 seconds ago       Running             coredns                   1                   49678c14a817d       coredns-7db6d8ff4d-fzqm5
	2fc8cafc02966       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   16 seconds ago       Running             kube-proxy                1                   dfbc0ab2d6067       kube-proxy-g6k67
	1e899a93d79f4       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   21 seconds ago       Running             kube-apiserver            1                   a273e2e2eeee4       kube-apiserver-pause-253908
	981d2d6d63e8d       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   21 seconds ago       Running             kube-scheduler            2                   870b9c8e48c33       kube-scheduler-pause-253908
	c3b6146205338       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   21 seconds ago       Running             etcd                      2                   45562200369ac       etcd-pause-253908
	cd5c399023762       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   21 seconds ago       Running             kube-controller-manager   2                   5cd0ea90f87a9       kube-controller-manager-pause-253908
	cf90c223253c3       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   26 seconds ago       Exited              kube-scheduler            1                   870b9c8e48c33       kube-scheduler-pause-253908
	3bed2d08de3cf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   26 seconds ago       Exited              etcd                      1                   45562200369ac       etcd-pause-253908
	0813fabc9fa6b       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   26 seconds ago       Exited              kube-controller-manager   1                   5cd0ea90f87a9       kube-controller-manager-pause-253908
	eb1145b77344e       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   About a minute ago   Exited              kube-proxy                0                   4560018c43edb       kube-proxy-g6k67
	f018015c88681       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   0d4df0d32e1c8       coredns-7db6d8ff4d-fzqm5
	c85b0b4fb2ae5       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   2 minutes ago        Exited              kube-apiserver            0                   515fd181b7d35       kube-apiserver-pause-253908
	
	
	==> coredns [523eda3c2ab8f80eff05dd7d1e88881e2a78abe174394518a282c0dd4a1e820d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51788 - 61020 "HINFO IN 8314347899284632768.7451899747668725525. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014768681s
	
	
	==> coredns [f018015c8868103b2edcf518d46e9d2d743f80366180fd28df34e13a3f133e66] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45907 - 65095 "HINFO IN 1888660009761205459.1580346811320470488. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017106059s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[658685555]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 12:01:40.948) (total time: 30001ms):
	Trace[658685555]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:02:10.948)
	Trace[658685555]: [30.001599167s] [30.001599167s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1260799927]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 12:01:40.949) (total time: 30000ms):
	Trace[1260799927]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:02:10.950)
	Trace[1260799927]: [30.000723878s] [30.000723878s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2139704370]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 12:01:40.948) (total time: 30002ms):
	Trace[2139704370]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (12:02:10.950)
	Trace[2139704370]: [30.00248972s] [30.00248972s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-253908
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-253908
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3838931194b4975fce64faf7ca14560885944437
	                    minikube.k8s.io/name=pause-253908
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T12_01_23_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 12:01:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-253908
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 12:03:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 12:03:20 +0000   Mon, 22 Apr 2024 12:01:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 12:03:20 +0000   Mon, 22 Apr 2024 12:01:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 12:03:20 +0000   Mon, 22 Apr 2024 12:01:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 12:03:20 +0000   Mon, 22 Apr 2024 12:01:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.32
	  Hostname:    pause-253908
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 99d1c1cbc20b4119ab7bbfbfdcdf3e8f
	  System UUID:                99d1c1cb-c20b-4119-ab7b-bfbfdcdf3e8f
	  Boot ID:                    1f247bb8-64c6-4486-8498-04e55a6609e9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-fzqm5                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m
	  kube-system                 etcd-pause-253908                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m15s
	  kube-system                 kube-apiserver-pause-253908             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-controller-manager-pause-253908    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-proxy-g6k67                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-scheduler-pause-253908             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 117s                   kube-proxy       
	  Normal  Starting                 16s                    kube-proxy       
	  Normal  Starting                 2m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m22s (x8 over 2m22s)  kubelet          Node pause-253908 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m22s (x8 over 2m22s)  kubelet          Node pause-253908 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m22s (x7 over 2m22s)  kubelet          Node pause-253908 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m16s                  kubelet          Node pause-253908 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m16s                  kubelet          Node pause-253908 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m16s                  kubelet          Node pause-253908 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m16s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m14s                  kubelet          Node pause-253908 status is now: NodeReady
	  Normal  RegisteredNode           2m2s                   node-controller  Node pause-253908 event: Registered Node pause-253908 in Controller
	  Normal  Starting                 23s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)      kubelet          Node pause-253908 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)      kubelet          Node pause-253908 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)      kubelet          Node pause-253908 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                     node-controller  Node pause-253908 event: Registered Node pause-253908 in Controller
	
	
	==> dmesg <==
	[  +0.063896] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076267] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.180697] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.157406] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.355194] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +5.112672] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.073888] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.963292] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.066265] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.532768] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.109624] kauditd_printk_skb: 69 callbacks suppressed
	[ +15.842737] systemd-fstab-generator[1514]: Ignoring "noauto" option for root device
	[  +0.130243] kauditd_printk_skb: 21 callbacks suppressed
	[Apr22 12:02] kauditd_printk_skb: 67 callbacks suppressed
	[Apr22 12:03] systemd-fstab-generator[2171]: Ignoring "noauto" option for root device
	[  +0.166234] systemd-fstab-generator[2183]: Ignoring "noauto" option for root device
	[  +0.211358] systemd-fstab-generator[2197]: Ignoring "noauto" option for root device
	[  +0.166855] systemd-fstab-generator[2211]: Ignoring "noauto" option for root device
	[  +0.393129] systemd-fstab-generator[2238]: Ignoring "noauto" option for root device
	[  +1.845556] systemd-fstab-generator[2712]: Ignoring "noauto" option for root device
	[  +3.314026] systemd-fstab-generator[2896]: Ignoring "noauto" option for root device
	[  +0.074157] kauditd_printk_skb: 170 callbacks suppressed
	[  +5.638034] kauditd_printk_skb: 46 callbacks suppressed
	[ +10.745933] systemd-fstab-generator[3525]: Ignoring "noauto" option for root device
	[  +0.113342] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [3bed2d08de3cfa628fa16aea071722f4bb8fcc3457bcf689579e8ac0a6954ed8] <==
	{"level":"info","ts":"2024-04-22T12:03:12.469965Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fbd4dd8524dacdec","initial-advertise-peer-urls":["https://192.168.50.32:2380"],"listen-peer-urls":["https://192.168.50.32:2380"],"advertise-client-urls":["https://192.168.50.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T12:03:13.643231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-22T12:03:13.64329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-22T12:03:13.643331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgPreVoteResp from fbd4dd8524dacdec at term 2"}
	{"level":"info","ts":"2024-04-22T12:03:13.643344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became candidate at term 3"}
	{"level":"info","ts":"2024-04-22T12:03:13.643349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgVoteResp from fbd4dd8524dacdec at term 3"}
	{"level":"info","ts":"2024-04-22T12:03:13.643358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became leader at term 3"}
	{"level":"info","ts":"2024-04-22T12:03:13.643366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbd4dd8524dacdec elected leader fbd4dd8524dacdec at term 3"}
	{"level":"info","ts":"2024-04-22T12:03:13.650616Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fbd4dd8524dacdec","local-member-attributes":"{Name:pause-253908 ClientURLs:[https://192.168.50.32:2379]}","request-path":"/0/members/fbd4dd8524dacdec/attributes","cluster-id":"2484c988a436b7d1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T12:03:13.650665Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T12:03:13.651251Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T12:03:13.652951Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.32:2379"}
	{"level":"info","ts":"2024-04-22T12:03:13.65461Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T12:03:13.65497Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T12:03:13.655017Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T12:03:14.159238Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-22T12:03:14.159354Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-253908","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.32:2380"],"advertise-client-urls":["https://192.168.50.32:2379"]}
	{"level":"warn","ts":"2024-04-22T12:03:14.159585Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.32:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T12:03:14.159608Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.32:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T12:03:14.159741Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T12:03:14.159827Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-22T12:03:14.161232Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"fbd4dd8524dacdec","current-leader-member-id":"fbd4dd8524dacdec"}
	{"level":"info","ts":"2024-04-22T12:03:14.167548Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-04-22T12:03:14.167917Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-04-22T12:03:14.16793Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-253908","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.32:2380"],"advertise-client-urls":["https://192.168.50.32:2379"]}
	
	
	==> etcd [c3b6146205338e6a1e1d50c3496b68d7eba4009f113c0391f1e3aba0855932b0] <==
	{"level":"info","ts":"2024-04-22T12:03:16.956245Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T12:03:16.956355Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T12:03:16.95673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec switched to configuration voters=(18146372362501279212)"}
	{"level":"info","ts":"2024-04-22T12:03:16.961323Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2484c988a436b7d1","local-member-id":"fbd4dd8524dacdec","added-peer-id":"fbd4dd8524dacdec","added-peer-peer-urls":["https://192.168.50.32:2380"]}
	{"level":"info","ts":"2024-04-22T12:03:16.961625Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2484c988a436b7d1","local-member-id":"fbd4dd8524dacdec","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T12:03:16.961628Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-22T12:03:16.963855Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fbd4dd8524dacdec","initial-advertise-peer-urls":["https://192.168.50.32:2380"],"listen-peer-urls":["https://192.168.50.32:2380"],"advertise-client-urls":["https://192.168.50.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T12:03:16.963903Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-22T12:03:16.961695Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-04-22T12:03:16.963964Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-04-22T12:03:16.971316Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T12:03:18.592259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec is starting a new election at term 3"}
	{"level":"info","ts":"2024-04-22T12:03:18.592386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became pre-candidate at term 3"}
	{"level":"info","ts":"2024-04-22T12:03:18.592461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgPreVoteResp from fbd4dd8524dacdec at term 3"}
	{"level":"info","ts":"2024-04-22T12:03:18.592493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became candidate at term 4"}
	{"level":"info","ts":"2024-04-22T12:03:18.592517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgVoteResp from fbd4dd8524dacdec at term 4"}
	{"level":"info","ts":"2024-04-22T12:03:18.592543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became leader at term 4"}
	{"level":"info","ts":"2024-04-22T12:03:18.592568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbd4dd8524dacdec elected leader fbd4dd8524dacdec at term 4"}
	{"level":"info","ts":"2024-04-22T12:03:18.598037Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fbd4dd8524dacdec","local-member-attributes":"{Name:pause-253908 ClientURLs:[https://192.168.50.32:2379]}","request-path":"/0/members/fbd4dd8524dacdec/attributes","cluster-id":"2484c988a436b7d1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T12:03:18.598269Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T12:03:18.598405Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T12:03:18.598985Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T12:03:18.599048Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T12:03:18.601687Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T12:03:18.601785Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.32:2379"}
	
	
	==> kernel <==
	 12:03:38 up 2 min,  0 users,  load average: 0.50, 0.30, 0.12
	Linux pause-253908 5.10.207 #1 SMP Thu Apr 18 22:28:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1e899a93d79f4f434c3090ae4f521cf4a77cf11d0d86c0fc39f8a3a859de477b] <==
	I0422 12:03:20.221299       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0422 12:03:20.221724       1 aggregator.go:165] initial CRD sync complete...
	I0422 12:03:20.221787       1 autoregister_controller.go:141] Starting autoregister controller
	I0422 12:03:20.221795       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0422 12:03:20.258238       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0422 12:03:20.270108       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0422 12:03:20.270711       1 shared_informer.go:320] Caches are synced for configmaps
	I0422 12:03:20.270729       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0422 12:03:20.270744       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0422 12:03:20.274946       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0422 12:03:20.270756       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E0422 12:03:20.300028       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0422 12:03:20.323210       1 cache.go:39] Caches are synced for autoregister controller
	I0422 12:03:20.333386       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0422 12:03:20.340080       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 12:03:20.340118       1 policy_source.go:224] refreshing policies
	I0422 12:03:20.421412       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0422 12:03:21.163120       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0422 12:03:22.261963       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0422 12:03:22.285101       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0422 12:03:22.354835       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0422 12:03:22.399517       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0422 12:03:22.410317       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0422 12:03:32.991129       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0422 12:03:33.138423       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [c85b0b4fb2ae515dd9ca5c77f4209d6cb8984b16b4c7abe738e8a5136d778ab9] <==
	I0422 12:01:38.250658       1 trace.go:236] Trace[1869007728]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:b6e35568-8639-4b54-8c20-2bab9f1392f5,client:192.168.50.32,api-group:,api-version:v1,name:bootstrap-signer,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/bootstrap-signer/token,user-agent:kube-controller-manager/v1.30.0 (linux/amd64) kubernetes/7c48c2b/kube-controller-manager,verb:POST (22-Apr-2024 12:01:36.855) (total time: 1395ms):
	Trace[1869007728]: ---"Write to database call succeeded" len:81 1395ms (12:01:38.250)
	Trace[1869007728]: [1.395266985s] [1.395266985s] END
	I0422 12:01:38.262687       1 trace.go:236] Trace[616800339]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:6eaf7e37-82a4-4c6a-9c7c-d0130551de8a,client:192.168.50.32,api-group:apps,api-version:v1,name:coredns,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/kube-system/deployments/coredns/status,user-agent:kube-controller-manager/v1.30.0 (linux/amd64) kubernetes/7c48c2b/system:serviceaccount:kube-system:deployment-controller,verb:PUT (22-Apr-2024 12:01:37.702) (total time: 560ms):
	Trace[616800339]: ["GuaranteedUpdate etcd3" audit-id:6eaf7e37-82a4-4c6a-9c7c-d0130551de8a,key:/deployments/kube-system/coredns,type:*apps.Deployment,resource:deployments.apps 559ms (12:01:37.702)
	Trace[616800339]:  ---"Txn call completed" 545ms (12:01:38.250)]
	Trace[616800339]: [560.330541ms] [560.330541ms] END
	I0422 12:01:38.267008       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0422 12:03:03.243635       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0422 12:03:03.244015       1 logging.go:59] [core] [Channel #12 SubChannel #14] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.244066       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.244100       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.257931       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.257994       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.258046       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.258112       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.258401       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.258451       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.258513       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.258897       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.258946       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.258997       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.259041       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.259079       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 12:03:03.259124       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0813fabc9fa6ba17b6726b6d8301a1540805164f9f3a949ad51582a197097d2b] <==
	I0422 12:03:12.798318       1 serving.go:380] Generated self-signed cert in-memory
	I0422 12:03:13.378507       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0422 12:03:13.378574       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 12:03:13.380104       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0422 12:03:13.380343       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0422 12:03:13.380683       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0422 12:03:13.381275       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [cd5c3990237627a631e36507aee77d29ea52967df0b6e46e51b0e06cd73a8a2c] <==
	I0422 12:03:32.979528       1 shared_informer.go:320] Caches are synced for namespace
	I0422 12:03:32.981845       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0422 12:03:32.987347       1 shared_informer.go:320] Caches are synced for GC
	I0422 12:03:32.990247       1 shared_informer.go:320] Caches are synced for PVC protection
	I0422 12:03:32.991949       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0422 12:03:32.992089       1 shared_informer.go:320] Caches are synced for job
	I0422 12:03:32.994452       1 shared_informer.go:320] Caches are synced for attach detach
	I0422 12:03:32.994665       1 shared_informer.go:320] Caches are synced for ephemeral
	I0422 12:03:32.997934       1 shared_informer.go:320] Caches are synced for PV protection
	I0422 12:03:33.005534       1 shared_informer.go:320] Caches are synced for deployment
	I0422 12:03:33.006114       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0422 12:03:33.010294       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0422 12:03:33.014575       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0422 12:03:33.015000       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.007µs"
	I0422 12:03:33.015249       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0422 12:03:33.040727       1 shared_informer.go:320] Caches are synced for daemon sets
	I0422 12:03:33.049264       1 shared_informer.go:320] Caches are synced for stateful set
	I0422 12:03:33.063750       1 shared_informer.go:320] Caches are synced for HPA
	I0422 12:03:33.128836       1 shared_informer.go:320] Caches are synced for endpoint
	I0422 12:03:33.200399       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0422 12:03:33.209367       1 shared_informer.go:320] Caches are synced for resource quota
	I0422 12:03:33.235477       1 shared_informer.go:320] Caches are synced for resource quota
	I0422 12:03:33.637423       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 12:03:33.637583       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0422 12:03:33.644733       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [2fc8cafc02966b54e54268a2d4c374f58c975dadb9aed0080ac448d01833570a] <==
	I0422 12:03:21.669764       1 server_linux.go:69] "Using iptables proxy"
	I0422 12:03:21.696879       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.32"]
	I0422 12:03:21.803612       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 12:03:21.803740       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 12:03:21.803831       1 server_linux.go:165] "Using iptables Proxier"
	I0422 12:03:21.820585       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 12:03:21.820806       1 server.go:872] "Version info" version="v1.30.0"
	I0422 12:03:21.820823       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 12:03:21.824118       1 config.go:192] "Starting service config controller"
	I0422 12:03:21.824202       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 12:03:21.824257       1 config.go:101] "Starting endpoint slice config controller"
	I0422 12:03:21.824261       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 12:03:21.824697       1 config.go:319] "Starting node config controller"
	I0422 12:03:21.824703       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 12:03:21.925447       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 12:03:21.925617       1 shared_informer.go:320] Caches are synced for service config
	I0422 12:03:21.929300       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [eb1145b77344e2c000127bdd6c9597830f1d5a68c2e0f5c258f7b7976c489984] <==
	I0422 12:01:40.967087       1 server_linux.go:69] "Using iptables proxy"
	I0422 12:01:40.977042       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.32"]
	I0422 12:01:41.053934       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 12:01:41.054316       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 12:01:41.054516       1 server_linux.go:165] "Using iptables Proxier"
	I0422 12:01:41.071382       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 12:01:41.072376       1 server.go:872] "Version info" version="v1.30.0"
	I0422 12:01:41.072574       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 12:01:41.076428       1 config.go:101] "Starting endpoint slice config controller"
	I0422 12:01:41.076523       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 12:01:41.076643       1 config.go:192] "Starting service config controller"
	I0422 12:01:41.076718       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 12:01:41.093826       1 config.go:319] "Starting node config controller"
	I0422 12:01:41.093948       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 12:01:41.176858       1 shared_informer.go:320] Caches are synced for service config
	I0422 12:01:41.177101       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 12:01:41.194013       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [981d2d6d63e8de7ba29edb2287de8176003035a044b6405a584c6b96084a41a0] <==
	I0422 12:03:17.686013       1 serving.go:380] Generated self-signed cert in-memory
	W0422 12:03:20.225010       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0422 12:03:20.225092       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 12:03:20.225197       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0422 12:03:20.225229       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0422 12:03:20.264673       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0422 12:03:20.264764       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 12:03:20.266775       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0422 12:03:20.266979       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0422 12:03:20.267024       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 12:03:20.267071       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 12:03:20.367903       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cf90c223253c36939ade0b1ead99909a2fa6f369ccb5853e200ec72b758cb4b0] <==
	I0422 12:03:13.427586       1 serving.go:380] Generated self-signed cert in-memory
	W0422 12:03:14.085333       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.168.50.32:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.50.32:8443: connect: connection refused
	W0422 12:03:14.085437       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0422 12:03:14.085463       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0422 12:03:14.090721       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0422 12:03:14.090772       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 12:03:14.094695       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0422 12:03:14.094703       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0422 12:03:14.095026       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0422 12:03:14.095121       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 22 12:03:16 pause-253908 kubelet[2903]: I0422 12:03:16.076508    2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ba4d5352a7dd71669287409cca46b471-flexvolume-dir\") pod \"kube-controller-manager-pause-253908\" (UID: \"ba4d5352a7dd71669287409cca46b471\") " pod="kube-system/kube-controller-manager-pause-253908"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: I0422 12:03:16.076522    2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ba4d5352a7dd71669287409cca46b471-k8s-certs\") pod \"kube-controller-manager-pause-253908\" (UID: \"ba4d5352a7dd71669287409cca46b471\") " pod="kube-system/kube-controller-manager-pause-253908"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: I0422 12:03:16.076549    2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3fb27f7952a7420d51aa5450257e91e7-kubeconfig\") pod \"kube-scheduler-pause-253908\" (UID: \"3fb27f7952a7420d51aa5450257e91e7\") " pod="kube-system/kube-scheduler-pause-253908"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: I0422 12:03:16.178368    2903 kubelet_node_status.go:73] "Attempting to register node" node="pause-253908"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: E0422 12:03:16.179429    2903 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.32:8443: connect: connection refused" node="pause-253908"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: I0422 12:03:16.285490    2903 scope.go:117] "RemoveContainer" containerID="0813fabc9fa6ba17b6726b6d8301a1540805164f9f3a949ad51582a197097d2b"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: I0422 12:03:16.287799    2903 scope.go:117] "RemoveContainer" containerID="3bed2d08de3cfa628fa16aea071722f4bb8fcc3457bcf689579e8ac0a6954ed8"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: I0422 12:03:16.289254    2903 scope.go:117] "RemoveContainer" containerID="cf90c223253c36939ade0b1ead99909a2fa6f369ccb5853e200ec72b758cb4b0"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: E0422 12:03:16.473033    2903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-253908?timeout=10s\": dial tcp 192.168.50.32:8443: connect: connection refused" interval="800ms"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: I0422 12:03:16.580800    2903 kubelet_node_status.go:73] "Attempting to register node" node="pause-253908"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: E0422 12:03:16.582078    2903 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.32:8443: connect: connection refused" node="pause-253908"
	Apr 22 12:03:16 pause-253908 kubelet[2903]: W0422 12:03:16.937937    2903 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.32:8443: connect: connection refused
	Apr 22 12:03:16 pause-253908 kubelet[2903]: E0422 12:03:16.938022    2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.32:8443: connect: connection refused
	Apr 22 12:03:17 pause-253908 kubelet[2903]: I0422 12:03:17.384226    2903 kubelet_node_status.go:73] "Attempting to register node" node="pause-253908"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.402490    2903 kubelet_node_status.go:112] "Node was previously registered" node="pause-253908"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.403561    2903 kubelet_node_status.go:76] "Successfully registered node" node="pause-253908"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.407488    2903 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.409066    2903 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.860464    2903 apiserver.go:52] "Watching apiserver"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.864249    2903 topology_manager.go:215] "Topology Admit Handler" podUID="6ea190b9-5bd9-4674-a3b8-bdc5a570f968" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fzqm5"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.864465    2903 topology_manager.go:215] "Topology Admit Handler" podUID="24d6aa75-1782-4786-89e8-922591a81986" podNamespace="kube-system" podName="kube-proxy-g6k67"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.867581    2903 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.910697    2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24d6aa75-1782-4786-89e8-922591a81986-xtables-lock\") pod \"kube-proxy-g6k67\" (UID: \"24d6aa75-1782-4786-89e8-922591a81986\") " pod="kube-system/kube-proxy-g6k67"
	Apr 22 12:03:20 pause-253908 kubelet[2903]: I0422 12:03:20.910858    2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24d6aa75-1782-4786-89e8-922591a81986-lib-modules\") pod \"kube-proxy-g6k67\" (UID: \"24d6aa75-1782-4786-89e8-922591a81986\") " pod="kube-system/kube-proxy-g6k67"
	Apr 22 12:03:26 pause-253908 kubelet[2903]: I0422 12:03:26.043343    2903 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-253908 -n pause-253908
helpers_test.go:261: (dbg) Run:  kubectl --context pause-253908 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (77.62s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (7200.074s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
E0422 12:29:09.117494   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/calico-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
	[the preceding warning was logged 27 more times; identical lines omitted]
E0422 12:29:36.938540   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kindnet-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
	[the preceding warning was logged 7 more times; identical lines omitted]
E0422 12:29:45.015665   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/auto-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
	[the preceding warning was logged 38 more times; identical lines omitted]
E0422 12:30:24.106044   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/custom-flannel-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
	[the preceding warning was logged 7 more times; identical lines omitted]
E0422 12:30:32.161310   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/calico-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
	[the preceding warning was logged 32 more times; identical lines omitted]
E0422 12:31:04.777325   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
	[the preceding warning was logged 12 more times; identical lines omitted]
E0422 12:31:17.644375   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
	[the preceding warning was logged 12 more times; identical lines omitted]
E0422 12:31:30.581325   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/flannel-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
	[the preceding warning was logged 8 more times; identical lines omitted]
E0422 12:31:40.378363   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
	[previous warning line repeated 6 more times]
E0422 12:31:47.152189   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/custom-flannel-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
	[previous warning line repeated 9 more times]
E0422 12:31:57.324075   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
	[previous warning line repeated 14 more times]
E0422 12:32:11.732791   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/bridge-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
	[previous warning line repeated 15 more times]
E0422 12:32:27.822175   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
	[previous warning line repeated 25 more times]
E0422 12:32:53.624813   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/flannel-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
	[previous warning line repeated 19 more times]
E0422 12:33:13.891689   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/kindnet-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
	[previous warning line repeated 7 more times]
E0422 12:33:21.971307   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/auto-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
	[previous warning line repeated 12 more times]
E0422 12:33:34.778157   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/bridge-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
	[previous warning line repeated 33 more times]
E0422 12:34:09.116933   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/calico-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
[last message repeated 11 more times]
E0422 12:34:20.693992   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
[last message repeated 62 more times]
E0422 12:35:24.106353   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/custom-flannel-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
[last message repeated 40 more times]
E0422 12:36:04.776639   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/enable-default-cni-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
[last message repeated 12 more times]
E0422 12:36:17.644172   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
[last message repeated 12 more times]
E0422 12:36:30.582050   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/flannel-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
[last message repeated 25 more times]
E0422 12:36:57.324227   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
(previous warning repeated 14 more times)
E0422 12:37:11.733176   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/bridge-230092/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.243:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.243:8443: connect: connection refused
(previous warning repeated 30 more times)
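The warnings above come from the suite's PodWait helper (the same helpers_test.go poll visible in goroutine 6094 below): it keeps listing the dashboard pods by label while the old-k8s-version apiserver at 192.168.61.243:8443 refuses connections, logging each failed list and retrying. A minimal sketch of that retry-on-list-error pattern, with illustrative names (waitForPods is not the actual minikube helper):

package example

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPods polls for a Running pod matching selector in ns, treating
// transient apiserver errors (such as the "connection refused" above) as
// retryable: log a warning and keep polling until the timeout expires.
func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 3*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// The apiserver may be down or restarting; warn and retry.
			log.Printf("WARNING: pod list for %q %q returned: %v", ns, selector, err)
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase == "Running" {
				return true, nil
			}
		}
		return false, nil
	})
}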
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (36m24s)
	TestNetworkPlugins/group (25m2s)
	TestStartStop (31m13s)
	TestStartStop/group/default-k8s-diff-port (25m38s)
	TestStartStop/group/default-k8s-diff-port/serial (25m38s)
	TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (3m26s)
	TestStartStop/group/embed-certs (23m11s)
	TestStartStop/group/embed-certs/serial (23m11s)
	TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (2m40s)
	TestStartStop/group/no-preload (26m10s)
	TestStartStop/group/no-preload/serial (26m10s)
	TestStartStop/group/no-preload/serial/AddonExistsAfterStop (2m6s)
	TestStartStop/group/old-k8s-version (26m50s)
	TestStartStop/group/old-k8s-version/serial (26m50s)
	TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8m36s)

                                                
                                                
goroutine 7164 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d
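Goroutine 7164 is the testing package's timeout alarm: it fires once the binary's -timeout (2h0m0s here) elapses and produces the panic and goroutine dump in this section. Long-running helpers can cap their own poll timeouts by the remaining budget instead of running into that alarm; a minimal sketch using t.Deadline() (podWaitBudget is an illustrative name, not part of the suite):

package example

import (
	"testing"
	"time"
)

// podWaitBudget caps a requested poll timeout by the time left before the
// test binary exceeds its -timeout, leaving a margin so post-mortem log
// collection can still run before the alarm panics.
func podWaitBudget(t *testing.T, want time.Duration) time.Duration {
	if deadline, ok := t.Deadline(); ok {
		if remaining := time.Until(deadline) - 2*time.Minute; remaining > 0 && remaining < want {
			return remaining
		}
	}
	return want
}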

                                                
                                                
goroutine 1 [chan receive, 32 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000196d00, 0xc00080fbb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000526420, {0x4955920, 0x2b, 0x2b}, {0x26ad44c?, 0xc0009a9200?, 0x4a11cc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0008a0dc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0008a0dc0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 11 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00070fd80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 3445 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002eb35c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3444
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2221 [chan receive, 25 minutes]:
testing.(*testContext).waitParallel(0xc000a82f00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1665 +0x5e9
testing.tRunner(0xc0020f8d00, 0xc002570078)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2194
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3748 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0021a3150, 0x4)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21456a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0038cc0c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0021a3180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c6d510, {0x361a860, 0xc00380ae40}, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000c6d510, 0x3b9aca00, 0x0, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3697
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef
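Many of the remaining goroutines, like 3748 above, are client-go certificate-rotation workers created by transport.(*tlsTransportCache).get; each sits in the wait.BackoffUntil/workqueue loop shown in the stack until its stop channel closes, so several of them linger for every cached transport the tests have opened. A minimal sketch of that loop (runWorker and processNext are illustrative names standing in for the cert_rotation worker):

package example

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// runWorker drains work items roughly once per second until stopCh closes,
// mirroring the loop the cert_rotation goroutines above are parked in.
func runWorker(processNext func() bool, stopCh <-chan struct{}) {
	wait.Until(func() {
		for processNext() {
			// keep draining until the queue is empty or shutting down
		}
	}, time.Second, stopCh)
}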

                                                
                                                
goroutine 3594 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00285ec00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3590
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 627 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0xc0021ec5a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 597
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 73 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 72
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 6094 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x363e4b0, 0xc0028aae60}, {0x3631bc0, 0xc0009043e0}, 0x1, 0x0, 0xc001fe9b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x363e520?, 0xc000543880?}, 0x3b9aca00, 0xc00006fd38?, 0x1, 0xc00006fb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x363e520, 0xc000543880}, 0xc00206f6c0, {0xc002a02ca0, 0x1c}, {0x2678d83, 0x14}, {0x2690811, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x363e520, 0xc000543880}, 0xc00206f6c0, {0xc002a02ca0, 0x1c}, {0x267bc3f?, 0xc00080f760?}, {0x552353?, 0x4a26cf?}, {0xc0021db100, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x13b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00206f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00206f6c0, 0xc002576080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3573
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2870 [chan receive, 29 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0021a3bc0, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2833
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 793 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00218f080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 822
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 973 [chan send, 97 minutes]:
os/exec.(*Cmd).watchCtx(0xc00253c840, 0xc000a77500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 972
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 3101 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0021a2190, 0x16)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21456a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0038cc8a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0021a2200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0034e6240, {0x361a860, 0xc00380a2a0}, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0034e6240, 0x3b9aca00, 0x0, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3207
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 178 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000c66f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 91
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3311 [chan receive, 26 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0021a20c0, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3295
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 526 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0xc00223b0e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 612
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 2883 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2882
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4560 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000c6b240, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4558
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2869 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0020d91a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2833
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2784 [chan receive, 26 minutes]:
testing.(*T).Run(0xc000197d40, {0x2654613?, 0x0?}, 0xc000412b00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000197d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc000197d40, 0xc000c6ab00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2780
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1180 [chan send, 91 minutes]:
os/exec.(*Cmd).watchCtx(0xc0029f6f20, 0xc002b201e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 854
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2781 [chan receive, 26 minutes]:
testing.(*T).Run(0xc000196b60, {0x2654613?, 0x0?}, 0xc0033b4100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000196b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc000196b60, 0xc000c6aa40)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2780
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 179 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00015f4c0, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 91
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 163 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00015f450, 0x2d)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21456a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000c66d20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00015f4c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00021b910, {0x361a860, 0xc000c6e090}, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00021b910, 0x3b9aca00, 0x0, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 179
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 164 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e6e0, 0xc000060360}, 0xc000a58f50, 0xc000a58f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e6e0, 0xc000060360}, 0xd?, 0xc000a58f50, 0xc000a58f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e6e0?, 0xc000060360?}, 0xc000037040?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x553be5?, 0xc000037040?, 0xc000c6a8c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 179
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 165 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 164
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3616 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc002b80110, 0x4)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21456a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0033ded80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002b80140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00092d7d0, {0x361a860, 0xc001fe5aa0}, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00092d7d0, 0x3b9aca00, 0x0, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3613
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3696 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0038cc1e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3692
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3617 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e6e0, 0xc000060360}, 0xc000095750, 0xc000095798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e6e0, 0xc000060360}, 0xd0?, 0xc000095750, 0xc000095798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e6e0?, 0xc000060360?}, 0xc00052cd70?, 0xc00052cd70?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000a0b5d8?, 0x9983a5?, 0xc00209b200?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3613
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3714 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3617
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3103 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3102
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3595 [chan receive, 26 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002b76e80, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3590
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 840 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 839
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3064 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e6e0, 0xc000060360}, 0xc000507750, 0xc0020adf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e6e0, 0xc000060360}, 0x0?, 0xc000507750, 0xc000507798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e6e0?, 0xc000060360?}, 0xc00210e820?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0005077d0?, 0x594064?, 0xc000a76600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3079
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 626 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0xc0021ec5a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 597
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 3207 [chan receive, 27 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0021a2200, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3185
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3337 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3336
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3451 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3450
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3310 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00380e960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3295
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3697 [chan receive, 25 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0021a3180, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3692
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2234 [chan receive, 32 minutes]:
testing.(*T).Run(0xc00210ed00, {0x2653086?, 0x552353?}, 0x30c0028)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00210ed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00210ed00, 0x30bfe50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 724 [IO wait, 107 minutes]:
internal/poll.runtime_pollWait(0x7fc074569068, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0x11?, 0x3fe?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00070e180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc00070e180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0000b7000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0000b7000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0007d00f0, {0x3631500, 0xc0000b7000})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0007d00f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00206eea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 625
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 839 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e6e0, 0xc000060360}, 0xc000808f50, 0xc00234ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e6e0, 0xc000060360}, 0x60?, 0xc000808f50, 0xc000808f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e6e0?, 0xc000060360?}, 0xc000808e30?, 0x7f9a20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x594005?, 0xc000705b80?, 0xc0022b0f60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 794
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2811 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0008fc850, 0x16)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21456a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002fcce40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008fc880)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00049e070, {0x361a860, 0xc0009205a0}, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00049e070, 0x3b9aca00, 0x0, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2842
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 525 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0xc00223b0e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 612
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 2882 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e6e0, 0xc000060360}, 0xc002892750, 0xc00258ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e6e0, 0xc000060360}, 0xe0?, 0xc002892750, 0xc002892798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e6e0?, 0xc000060360?}, 0xc0028927b0?, 0x7b9db8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x594005?, 0xc0020c4dc0?, 0xc00081e7e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2870
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 794 [chan receive, 97 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0021c6400, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 822
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3335 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc002b76e50, 0x16)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21456a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00285eae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002b76e80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005a11a0, {0x361a860, 0xc0023f6270}, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005a11a0, 0x3b9aca00, 0x0, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3595
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3078 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002a49bc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3077
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 6374 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x363e520, 0xc00045db20}, {0x3631bc0, 0xc002578520}, 0x1, 0x0, 0xc00006fb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x363e520?, 0xc0004b4f50?}, 0x3b9aca00, 0xc001fe9d38?, 0x1, 0xc001fe9b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x363e520, 0xc0004b4f50}, 0xc0000364e0, {0xc0025d53c8, 0x12}, {0x2678d83, 0x14}, {0x2690811, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x363e520, 0xc0004b4f50}, 0xc0000364e0, {0xc0025d53c8, 0x12}, {0x2660354?, 0xc0026dbf60?}, {0x552353?, 0x4a26cf?}, {0xc0021db000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x13b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0000364e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0000364e0, 0xc0029a2000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3820
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390
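Goroutines like this one are the test helpers themselves: PodWait drives wait.PollUntilContextTimeout, which re-runs a readiness condition on an interval until it returns true or the timeout expires (the trailing 0x7dba821800 argument in the trace is a duration of roughly 9 minutes). The stand-alone sketch below shows that polling call with a made-up condition; it is illustrative, not the minikube helper itself.

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	start := time.Now()

	// Poll once per second, for at most 10 seconds, checking immediately first.
	err := wait.PollUntilContextTimeout(context.Background(), time.Second, 10*time.Second, true,
		func(ctx context.Context) (bool, error) {
			// Stand-in for "are the pods matching the selector Running?".
			return time.Since(start) > 3*time.Second, nil
		})

	fmt.Println("poll finished after", time.Since(start).Round(time.Second), "err:", err)
}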

                                                
                                                
goroutine 2802 [chan receive, 25 minutes]:
testing.(*T).Run(0xc0023f41a0, {0x2654613?, 0x0?}, 0xc00070f780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0023f41a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0023f41a0, 0xc000c6abc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2780
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2194 [chan receive, 37 minutes]:
testing.(*T).Run(0xc00210e680, {0x2653086?, 0x55249c?}, 0xc002570078)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00210e680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00210e680, 0x30bfe08)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2813 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2812
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2842 [chan receive, 29 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008fc880, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2840
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3749 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e6e0, 0xc000060360}, 0xc002896f50, 0xc002896f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e6e0, 0xc000060360}, 0x30?, 0xc002896f50, 0xc002896f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e6e0?, 0xc000060360?}, 0xc0000364e0?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002896fd0?, 0x594064?, 0xc002864630?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3697
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3364 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e6e0, 0xc000060360}, 0xc002891750, 0xc002891798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e6e0, 0xc000060360}, 0xa0?, 0xc002891750, 0xc002891798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e6e0?, 0xc000060360?}, 0xc0028917b0?, 0x99de58?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0028917d0?, 0x594064?, 0xc002d088a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3311
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 1039 [chan send, 97 minutes]:
os/exec.(*Cmd).watchCtx(0xc00293fa20, 0xc00291af60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1038
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2780 [chan receive, 32 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000196680, 0x30c0028)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2234
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3079 [chan receive, 29 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002b81940, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3077
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3446 [chan receive, 26 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0021a38c0, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3444
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 4523 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4522
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2865 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0021a3b90, 0x16)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21456a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0020d8e40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0021a3bc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a6b610, {0x361a860, 0xc00219ec30}, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000a6b610, 0x3b9aca00, 0x0, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2870
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3820 [chan receive, 2 minutes]:
testing.(*T).Run(0xc000036680, {0x2678de7?, 0x60400000004?}, 0xc0029a2000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc000036680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc000036680, 0xc00070f780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2802
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2783 [chan receive, 26 minutes]:
testing.(*T).Run(0xc000197ba0, {0x2654613?, 0x0?}, 0xc00089de00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000197ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc000197ba0, 0xc000c6aac0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2780
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2841 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002fccf60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2840
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 838 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0021c63d0, 0x28)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21456a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00218ef60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0021c6400)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000826e50, {0x361a860, 0xc002a11ef0}, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000826e50, 0x3b9aca00, 0x0, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 794
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3612 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0033deea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3608
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3363 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0021a2090, 0x16)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21456a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00380e840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0021a20c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00366a580, {0x361a860, 0xc00241c150}, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00366a580, 0x3b9aca00, 0x0, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3311
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 6563 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x363e4b0, 0xc0007ef770}, {0x3631bc0, 0xc00070a420}, 0x1, 0x0, 0xc001fedb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x363e520?, 0xc000411f10?}, 0x3b9aca00, 0xc001fe9d38?, 0x1, 0xc001fe9b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x363e520, 0xc000411f10}, 0xc000037380, {0xc0025d96b0, 0x11}, {0x2678d83, 0x14}, {0x2690811, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x363e520, 0xc000411f10}, 0xc000037380, {0xc0025d96b0, 0x11}, {0x265e178?, 0xc0021a9f60?}, {0x552353?, 0x4a26cf?}, {0xc0021da700, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x13b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000037380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc000037380, 0xc002576100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3460
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3613 [chan receive, 25 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002b80140, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3608
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2812 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e6e0, 0xc000060360}, 0xc002893750, 0xc002893798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e6e0, 0xc000060360}, 0x0?, 0xc002893750, 0xc002893798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e6e0?, 0xc000060360?}, 0xc0016e3ce0?, 0xc0025a4580?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x594005?, 0xc000704c60?, 0xc002b20300?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2842
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3102 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e6e0, 0xc000060360}, 0xc00080f750, 0xc00258ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e6e0, 0xc000060360}, 0xe0?, 0xc00080f750, 0xc00080f798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e6e0?, 0xc000060360?}, 0xc000037ba0?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x594005?, 0xc003804000?, 0xc000a764e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3207
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3365 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3364
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3460 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0023f4ea0, {0x2678de7?, 0x60400000004?}, 0xc002576100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0023f4ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0023f4ea0, 0xc000412b00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2784
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3573 [chan receive, 3 minutes]:
testing.(*T).Run(0xc0023f5380, {0x2678de7?, 0x60400000004?}, 0xc002576080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0023f5380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0023f5380, 0xc00089de00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2783
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3450 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e6e0, 0xc000060360}, 0xc0024a6f50, 0xc0024a6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e6e0, 0xc000060360}, 0x80?, 0xc0024a6f50, 0xc0024a6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e6e0?, 0xc000060360?}, 0xc0024a6fb0?, 0x99de58?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x594005?, 0xc002180580?, 0xc002d08d80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3446
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3290 [chan receive, 8 minutes]:
testing.(*T).Run(0xc0020f8820, {0x267ea96?, 0x60400000004?}, 0xc00089c000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0020f8820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0020f8820, 0xc0033b4100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2781
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3750 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3749
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3206 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0038cc9c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3185
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3063 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc002b81910, 0x16)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21456a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002a49aa0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002b81940)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002be5650, {0x361a860, 0xc0025a32c0}, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002be5650, 0x3b9aca00, 0x0, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3079
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3065 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3064
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4558 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x363e520, 0xc000119960}, {0x3631bc0, 0xc002364140}, 0x1, 0x0, 0xc002047c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x363e520?, 0xc000294000?}, 0x3b9aca00, 0xc002057e10?, 0x1, 0xc002057c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x363e520, 0xc000294000}, 0xc00206f1e0, {0xc0025d4048, 0x16}, {0x2678d83, 0x14}, {0x2690811, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x363e520, 0xc000294000}, 0xc00206f1e0, {0xc0025d4048, 0x16}, {0x266a267?, 0xc00080af60?}, {0x552353?, 0x4a26cf?}, {0xc0033ba180, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00206f1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00206f1e0, 0xc00089c000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3290
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3449 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0021a3890, 0x16)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21456a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002eb34a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0021a38c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000703590, {0x361a860, 0xc000a7f950}, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000703590, 0x3b9aca00, 0x0, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3446
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3336 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e6e0, 0xc000060360}, 0xc0026e1f50, 0xc0026e1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e6e0, 0xc000060360}, 0x7?, 0xc0026e1f50, 0xc0026e1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e6e0?, 0xc000060360?}, 0xc000197a00?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0026e1fd0?, 0x594064?, 0xc002dfed80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3595
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 4522 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x363e6e0, 0xc000060360}, 0xc002557750, 0xc002557798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x363e6e0, 0xc000060360}, 0x1c?, 0xc002557750, 0xc002557798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x363e6e0?, 0xc000060360?}, 0xc0020f9a00?, 0x552c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0025577d0?, 0x594064?, 0xc0022ea200?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4560
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 4521 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000c6b210, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21456a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000c67860)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000c6b240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a5c730, {0x361a860, 0xc0025044b0}, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000a5c730, 0x3b9aca00, 0x0, 0x1, 0xc000060360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4560
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 4015 [IO wait]:
internal/poll.runtime_pollWait(0x7fc074568d80, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002577600?, 0xc00208c800?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002577600, {0xc00208c800, 0x800, 0x800})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc002577600, {0xc00208c800?, 0x7fc074386168?, 0xc000a0b350?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0025ec338, {0xc00208c800?, 0xc002383938?, 0x41567b?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc000a0b350, {0xc00208c800?, 0x0?, 0xc000a0b350?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc00217ed30, {0x361b020, 0xc000a0b350})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00217ea88, {0x361a3e0, 0xc0025ec338}, 0xc002383980?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc00217ea88, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc00217ea88, {0xc0009df000, 0x1000, 0xc0026e5880?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc002eb30e0, {0xc002bd6200, 0x9, 0x4911bf0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x36194e0, 0xc002eb30e0}, {0xc002bd6200, 0x9, 0x9}, 0x9)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:335 +0x90
io.ReadFull(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc002bd6200, 0x9, 0x2383dc0?}, {0x36194e0?, 0xc002eb30e0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc002bd61c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:498 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc002383fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2429 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0020ecc00)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2325 +0x65
created by golang.org/x/net/http2.(*ClientConn).goRun in goroutine 4014
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:369 +0x2d

                                                
                                                
goroutine 3964 [IO wait]:
internal/poll.runtime_pollWait(0x7fc074569350, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00070ea80?, 0xc002064000?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00070ea80, {0xc002064000, 0x800, 0x800})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc00070ea80, {0xc002064000?, 0xc000650500?, 0x2?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0025ec020, {0xc002064000?, 0xc00206405f?, 0x70?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc000a0b698, {0xc002064000?, 0x0?, 0xc000a0b698?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc00217e9b0, {0x361b020, 0xc000a0b698})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00217e708, {0x7fc0745b7090, 0xc002570000}, 0xc001ff1980?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc00217e708, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc00217e708, {0xc003021000, 0x1000, 0xc0026e5880?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc002475a40, {0xc000c76200, 0x9, 0x4911bf0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x36194e0, 0xc002475a40}, {0xc000c76200, 0x9, 0x9}, 0x9)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:335 +0x90
io.ReadFull(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc000c76200, 0x9, 0x1ff1dc0?}, {0x36194e0?, 0xc002475a40?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc000c761c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:498 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001ff1fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2429 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0033ba300)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2325 +0x65
created by golang.org/x/net/http2.(*ClientConn).goRun in goroutine 3963
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:369 +0x2d

                                                
                                                
goroutine 4559 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000c67980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 4558
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 4121 [IO wait]:
internal/poll.runtime_pollWait(0x7fc074057c38, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0033b4800?, 0xc000831800?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0033b4800, {0xc000831800, 0x800, 0x800})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0033b4800, {0xc000831800?, 0xc0020788c0?, 0x2?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc001f9b1e8, {0xc000831800?, 0xc00083185f?, 0x70?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc000a0b518, {0xc000831800?, 0x0?, 0xc000a0b518?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0002357b0, {0x361b020, 0xc000a0b518})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc000235508, {0x7fc0745b7090, 0xc002be6ff0}, 0xc0020a6980?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc000235508, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc000235508, {0xc002256000, 0x1000, 0xc0026e5880?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc002482840, {0xc0033ac660, 0x9, 0x4911bf0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x36194e0, 0xc002482840}, {0xc0033ac660, 0x9, 0x9}, 0x9)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:335 +0x90
io.ReadFull(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0033ac660, 0x9, 0x20a6dc0?}, {0x36194e0?, 0xc002482840?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0033ac620)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:498 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0020a6fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2429 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc002068f00)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2325 +0x65
created by golang.org/x/net/http2.(*ClientConn).goRun in goroutine 4120
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:369 +0x2d

                                                
                                    

Test pass (174/221)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 24.75
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.0/json-events 13.29
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.07
18 TestDownloadOnly/v1.30.0/DeleteAll 0.13
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.57
22 TestOffline 64.7
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 214.29
29 TestAddons/parallel/Registry 17.9
31 TestAddons/parallel/InspektorGadget 10.99
33 TestAddons/parallel/HelmTiller 15.77
35 TestAddons/parallel/CSI 50.9
36 TestAddons/parallel/Headlamp 17.17
37 TestAddons/parallel/CloudSpanner 5.67
38 TestAddons/parallel/LocalPath 13.23
39 TestAddons/parallel/NvidiaDevicePlugin 5.58
40 TestAddons/parallel/Yakd 6.01
43 TestAddons/serial/GCPAuth/Namespaces 0.12
45 TestCertOptions 51.28
46 TestCertExpiration 278.64
48 TestForceSystemdFlag 84.16
49 TestForceSystemdEnv 98.47
51 TestKVMDriverInstallOrUpdate 4.65
55 TestErrorSpam/setup 48.18
56 TestErrorSpam/start 0.37
57 TestErrorSpam/status 0.78
58 TestErrorSpam/pause 1.7
59 TestErrorSpam/unpause 1.73
60 TestErrorSpam/stop 4.69
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 98.76
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 37.97
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.2
72 TestFunctional/serial/CacheCmd/cache/add_local 2.42
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.12
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
80 TestFunctional/serial/ExtraConfig 409.48
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 1.25
83 TestFunctional/serial/LogsFileCmd 1.35
84 TestFunctional/serial/InvalidService 4.26
86 TestFunctional/parallel/ConfigCmd 0.43
88 TestFunctional/parallel/DryRun 0.34
89 TestFunctional/parallel/InternationalLanguage 0.17
90 TestFunctional/parallel/StatusCmd 1.05
94 TestFunctional/parallel/ServiceCmdConnect 10.66
95 TestFunctional/parallel/AddonsCmd 0.17
96 TestFunctional/parallel/PersistentVolumeClaim 44.34
98 TestFunctional/parallel/SSHCmd 0.52
99 TestFunctional/parallel/CpCmd 1.6
100 TestFunctional/parallel/MySQL 27.85
101 TestFunctional/parallel/FileSync 0.3
102 TestFunctional/parallel/CertSync 1.75
106 TestFunctional/parallel/NodeLabels 0.08
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
110 TestFunctional/parallel/License 0.48
111 TestFunctional/parallel/ServiceCmd/DeployApp 11.23
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
122 TestFunctional/parallel/ProfileCmd/profile_list 0.45
123 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
124 TestFunctional/parallel/MountCmd/any-port 9.68
125 TestFunctional/parallel/ServiceCmd/List 0.33
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
128 TestFunctional/parallel/ServiceCmd/Format 0.49
129 TestFunctional/parallel/MountCmd/specific-port 1.9
130 TestFunctional/parallel/ServiceCmd/URL 0.41
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
134 TestFunctional/parallel/Version/short 0.06
135 TestFunctional/parallel/Version/components 0.62
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.63
141 TestFunctional/parallel/ImageCommands/Setup 2.14
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.53
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.07
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 11.44
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.94
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.33
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.61
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.53
150 TestFunctional/delete_addon-resizer_images 0.07
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.01
156 TestMultiControlPlane/serial/StartCluster 211.02
157 TestMultiControlPlane/serial/DeployApp 6.9
158 TestMultiControlPlane/serial/PingHostFromPods 1.4
159 TestMultiControlPlane/serial/AddWorkerNode 48.66
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.56
162 TestMultiControlPlane/serial/CopyFile 13.52
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.42
168 TestMultiControlPlane/serial/DeleteSecondaryNode 17.7
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
171 TestMultiControlPlane/serial/RestartCluster 500.81
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
173 TestMultiControlPlane/serial/AddSecondaryNode 76.94
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.56
178 TestJSONOutput/start/Command 61.87
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.78
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.72
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.41
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.21
206 TestMainNoArgs 0.06
207 TestMinikubeProfile 96.02
210 TestMountStart/serial/StartWithMountFirst 30.98
211 TestMountStart/serial/VerifyMountFirst 0.38
212 TestMountStart/serial/StartWithMountSecond 27.22
213 TestMountStart/serial/VerifyMountSecond 0.38
214 TestMountStart/serial/DeleteFirst 0.69
215 TestMountStart/serial/VerifyMountPostDelete 0.41
216 TestMountStart/serial/Stop 1.41
217 TestMountStart/serial/RestartStopped 23.21
218 TestMountStart/serial/VerifyMountPostStop 0.38
221 TestMultiNode/serial/FreshStart2Nodes 136.21
222 TestMultiNode/serial/DeployApp2Nodes 5.46
223 TestMultiNode/serial/PingHostFrom2Pods 0.91
224 TestMultiNode/serial/AddNode 45.66
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.23
227 TestMultiNode/serial/CopyFile 7.49
228 TestMultiNode/serial/StopNode 2.49
229 TestMultiNode/serial/StartAfterStop 32.14
231 TestMultiNode/serial/DeleteNode 2.22
233 TestMultiNode/serial/RestartMultiNode 173.7
234 TestMultiNode/serial/ValidateNameConflict 47.93
241 TestScheduledStopUnix 116.52
245 TestRunningBinaryUpgrade 193.4
250 TestPause/serial/Start 127.2
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
253 TestNoKubernetes/serial/StartWithK8s 122.89
265 TestNoKubernetes/serial/StartWithStopK8s 48.11
267 TestNoKubernetes/serial/Start 34.71
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
269 TestNoKubernetes/serial/ProfileList 1.55
270 TestNoKubernetes/serial/Stop 2.33
271 TestNoKubernetes/serial/StartNoArgs 70.11
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
273 TestStoppedBinaryUpgrade/Setup 2.59
274 TestStoppedBinaryUpgrade/Upgrade 101.94
283 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
x
+
TestDownloadOnly/v1.20.0/json-events (24.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-692083 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-692083 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (24.749510883s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (24.75s)
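For reference, the download-only flow exercised above can be reproduced outside CI with essentially the same flags; the profile name below is arbitrary and "minikube" stands in for the CI-built out/minikube-linux-amd64 binary:

	# pre-fetch the ISO, preload tarball and images for v1.20.0 without creating a VM
	minikube start -o=json --download-only -p download-demo --force --alsologtostderr \
	  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2
	# remove the throwaway profile afterwards
	minikube delete -p download-demo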

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-692083
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-692083: exit status 85 (70.547172ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-692083 | jenkins | v1.33.0 | 22 Apr 24 10:37 UTC |          |
	|         | -p download-only-692083        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 10:37:43
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 10:37:43.490906   14957 out.go:291] Setting OutFile to fd 1 ...
	I0422 10:37:43.491101   14957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 10:37:43.491170   14957 out.go:304] Setting ErrFile to fd 2...
	I0422 10:37:43.491304   14957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 10:37:43.491523   14957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	W0422 10:37:43.491697   14957 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18711-7633/.minikube/config/config.json: open /home/jenkins/minikube-integration/18711-7633/.minikube/config/config.json: no such file or directory
	I0422 10:37:43.492260   14957 out.go:298] Setting JSON to true
	I0422 10:37:43.493176   14957 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1207,"bootTime":1713781057,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 10:37:43.493240   14957 start.go:139] virtualization: kvm guest
	I0422 10:37:43.495553   14957 out.go:97] [download-only-692083] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 10:37:43.496943   14957 out.go:169] MINIKUBE_LOCATION=18711
	W0422 10:37:43.495641   14957 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball: no such file or directory
	I0422 10:37:43.495664   14957 notify.go:220] Checking for updates...
	I0422 10:37:43.499625   14957 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 10:37:43.501032   14957 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 10:37:43.502288   14957 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 10:37:43.503593   14957 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0422 10:37:43.506148   14957 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0422 10:37:43.506404   14957 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 10:37:43.612010   14957 out.go:97] Using the kvm2 driver based on user configuration
	I0422 10:37:43.612034   14957 start.go:297] selected driver: kvm2
	I0422 10:37:43.612040   14957 start.go:901] validating driver "kvm2" against <nil>
	I0422 10:37:43.612363   14957 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 10:37:43.612496   14957 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18711-7633/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 10:37:43.627455   14957 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 10:37:43.627509   14957 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 10:37:43.627967   14957 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0422 10:37:43.628111   14957 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0422 10:37:43.628165   14957 cni.go:84] Creating CNI manager for ""
	I0422 10:37:43.628180   14957 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 10:37:43.628188   14957 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 10:37:43.628245   14957 start.go:340] cluster config:
	{Name:download-only-692083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-692083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 10:37:43.628431   14957 iso.go:125] acquiring lock: {Name:mkb6ac9fd17ffabc92a94047094130aad6203a95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 10:37:43.630377   14957 out.go:97] Downloading VM boot image ...
	I0422 10:37:43.630422   14957 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18711-7633/.minikube/cache/iso/amd64/minikube-v1.33.0-amd64.iso
	I0422 10:37:53.069822   14957 out.go:97] Starting "download-only-692083" primary control-plane node in "download-only-692083" cluster
	I0422 10:37:53.069855   14957 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 10:37:53.179447   14957 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0422 10:37:53.179501   14957 cache.go:56] Caching tarball of preloaded images
	I0422 10:37:53.179672   14957 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 10:37:53.181720   14957 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0422 10:37:53.181746   14957 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0422 10:37:53.291662   14957 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-692083 host does not exist
	  To start a cluster, run: "minikube start -p download-only-692083"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-692083
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/json-events (13.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-205366 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-205366 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.289176151s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (13.29s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-205366
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-205366: exit status 85 (69.191469ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-692083 | jenkins | v1.33.0 | 22 Apr 24 10:37 UTC |                     |
	|         | -p download-only-692083        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC | 22 Apr 24 10:38 UTC |
	| delete  | -p download-only-692083        | download-only-692083 | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC | 22 Apr 24 10:38 UTC |
	| start   | -o=json --download-only        | download-only-205366 | jenkins | v1.33.0 | 22 Apr 24 10:38 UTC |                     |
	|         | -p download-only-205366        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 10:38:08
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 10:38:08.575872   15212 out.go:291] Setting OutFile to fd 1 ...
	I0422 10:38:08.576091   15212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 10:38:08.576100   15212 out.go:304] Setting ErrFile to fd 2...
	I0422 10:38:08.576105   15212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 10:38:08.576299   15212 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 10:38:08.576862   15212 out.go:298] Setting JSON to true
	I0422 10:38:08.577657   15212 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1232,"bootTime":1713781057,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 10:38:08.577714   15212 start.go:139] virtualization: kvm guest
	I0422 10:38:08.579946   15212 out.go:97] [download-only-205366] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 10:38:08.581546   15212 out.go:169] MINIKUBE_LOCATION=18711
	I0422 10:38:08.580135   15212 notify.go:220] Checking for updates...
	I0422 10:38:08.584256   15212 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 10:38:08.585816   15212 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 10:38:08.587229   15212 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 10:38:08.588435   15212 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0422 10:38:08.590612   15212 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0422 10:38:08.590839   15212 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 10:38:08.622043   15212 out.go:97] Using the kvm2 driver based on user configuration
	I0422 10:38:08.622074   15212 start.go:297] selected driver: kvm2
	I0422 10:38:08.622080   15212 start.go:901] validating driver "kvm2" against <nil>
	I0422 10:38:08.622391   15212 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 10:38:08.622452   15212 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18711-7633/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 10:38:08.637141   15212 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 10:38:08.637185   15212 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 10:38:08.637624   15212 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0422 10:38:08.637752   15212 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0422 10:38:08.637813   15212 cni.go:84] Creating CNI manager for ""
	I0422 10:38:08.637825   15212 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 10:38:08.637835   15212 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 10:38:08.637880   15212 start.go:340] cluster config:
	{Name:download-only-205366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-205366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 10:38:08.637959   15212 iso.go:125] acquiring lock: {Name:mkb6ac9fd17ffabc92a94047094130aad6203a95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 10:38:08.639622   15212 out.go:97] Starting "download-only-205366" primary control-plane node in "download-only-205366" cluster
	I0422 10:38:08.639644   15212 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 10:38:08.750318   15212 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 10:38:08.750374   15212 cache.go:56] Caching tarball of preloaded images
	I0422 10:38:08.750548   15212 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 10:38:08.752432   15212 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0422 10:38:08.752451   15212 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 ...
	I0422 10:38:08.869948   15212 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:5927bd9d05f26d08fc05540d1d92e5d8 -> /home/jenkins/minikube-integration/18711-7633/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-205366 host does not exist
	  To start a cluster, run: "minikube start -p download-only-205366"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-205366
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-683094 --alsologtostderr --binary-mirror http://127.0.0.1:40437 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-683094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-683094
--- PASS: TestBinaryMirror (0.57s)

                                                
                                    
x
+
TestOffline (64.7s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-251606 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-251606 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m3.644722964s)
helpers_test.go:175: Cleaning up "offline-crio-251606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-251606
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-251606: (1.0523172s)
--- PASS: TestOffline (64.70s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-649657
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-649657: exit status 85 (57.656472ms)

                                                
                                                
-- stdout --
	* Profile "addons-649657" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-649657"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-649657
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-649657: exit status 85 (64.689983ms)

                                                
                                                
-- stdout --
	* Profile "addons-649657" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-649657"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (214.29s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-649657 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-649657 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m34.288973766s)
--- PASS: TestAddons/Setup (214.29s)
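Each addon enabled by this single start invocation can also be toggled individually once the cluster is up, as the parallel subtests below do; for example (commands taken from those subtests, with "minikube" standing in for the CI binary):

	minikube addons enable headlamp -p addons-649657 --alsologtostderr -v=1
	minikube -p addons-649657 addons disable registry --alsologtostderr -v=1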

                                                
                                    
x
+
TestAddons/parallel/Registry (17.9s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 24.08827ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-nqc7x" [b64590e0-a02f-45d2-8f1e-198288db17c6] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005251116s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kvfwc" [8ff782c8-8bc1-4ee5-96c7-36c9b42dd909] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007353864s
addons_test.go:340: (dbg) Run:  kubectl --context addons-649657 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-649657 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-649657 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.964048129s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-649657 ip
2024/04/22 10:42:14 [DEBUG] GET http://192.168.39.194:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-649657 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.90s)
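The registry check above amounts to probing the in-cluster service DNS name from a throwaway busybox pod and reading back the node IP used for the external http://<ip>:5000 request; a minimal sketch, assuming the addons-649657 context is still active and "minikube" stands in for the CI binary:

	# probe the registry service from inside the cluster (command as in the test above)
	kubectl --context addons-649657 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# node IP used by the test for the external registry endpoint check
	minikube -p addons-649657 ip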

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.99s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ct8lx" [7eaf01e1-1b71-4b80-b4f6-0d59303afff6] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005466326s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-649657
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-649657: (5.983952324s)
--- PASS: TestAddons/parallel/InspektorGadget (10.99s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (15.77s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 25.830655ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-6gjgv" [8fff0c69-9c68-4af8-962b-aa26874d6504] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009680136s
addons_test.go:473: (dbg) Run:  kubectl --context addons-649657 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-649657 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.979953822s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-649657 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (15.77s)

                                                
                                    
x
+
TestAddons/parallel/CSI (50.9s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 31.192822ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-649657 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-649657 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [87ac5ea3-0a6e-4f9a-86df-98dca1736bf1] Pending
helpers_test.go:344: "task-pv-pod" [87ac5ea3-0a6e-4f9a-86df-98dca1736bf1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [87ac5ea3-0a6e-4f9a-86df-98dca1736bf1] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004690927s
addons_test.go:584: (dbg) Run:  kubectl --context addons-649657 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-649657 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-649657 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-649657 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-649657 delete pod task-pv-pod: (1.545806587s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-649657 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-649657 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-649657 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3d5e6511-89f2-49e9-990a-87a9c234f89e] Pending
helpers_test.go:344: "task-pv-pod-restore" [3d5e6511-89f2-49e9-990a-87a9c234f89e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3d5e6511-89f2-49e9-990a-87a9c234f89e] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.01025349s
addons_test.go:626: (dbg) Run:  kubectl --context addons-649657 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-649657 delete pod task-pv-pod-restore: (1.954004708s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-649657 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-649657 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-649657 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-649657 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.122192927s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-649657 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.90s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.17s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-649657 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-649657 --alsologtostderr -v=1: (1.164326017s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-bb5x7" [e60be124-1cbb-461a-b07a-c7ad8934897d] Pending
helpers_test.go:344: "headlamp-7559bf459f-bb5x7" [e60be124-1cbb-461a-b07a-c7ad8934897d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-bb5x7" [e60be124-1cbb-461a-b07a-c7ad8934897d] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.003633414s
--- PASS: TestAddons/parallel/Headlamp (17.17s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.67s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-vlh2c" [420ec078-5632-4fc2-9fb1-e28cf20e69c4] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004620125s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-649657
--- PASS: TestAddons/parallel/CloudSpanner (5.67s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (13.23s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-649657 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-649657 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649657 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9d0df8e9-b969-4e53-81f2-122643e6b283] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9d0df8e9-b969-4e53-81f2-122643e6b283] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9d0df8e9-b969-4e53-81f2-122643e6b283] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004854994s
addons_test.go:891: (dbg) Run:  kubectl --context addons-649657 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-649657 ssh "cat /opt/local-path-provisioner/pvc-60f66f58-3d14-4dd8-976b-05bdb591f503_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-649657 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-649657 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-649657 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (13.23s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.58s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-w4vxc" [3bfb0bd5-3242-4f72-9f7c-0c79543badd2] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00621199s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-649657
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.58s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-rz9f2" [5d5608ee-50a3-46d3-9363-9bef97083ea4] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005248078s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-649657 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-649657 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestCertOptions (51.28s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-890154 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0422 12:06:17.644365   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-890154 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (50.035034666s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-890154 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-890154 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-890154 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-890154" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-890154
--- PASS: TestCertOptions (51.28s)
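What this test asserts reduces to starting a cluster with extra apiserver SANs and a non-default apiserver port, then inspecting the generated certificate over SSH; a hedged sketch using the same flags as the run above (profile name arbitrary, "minikube" in place of the CI binary):

	minikube start -p cert-options-demo --memory=2048 \
	  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
	  --apiserver-names=localhost --apiserver-names=www.google.com \
	  --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
	# the extra IPs/names should appear as SANs in the apiserver certificate
	minikube -p cert-options-demo ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"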

                                                
                                    
x
+
TestCertExpiration (278.64s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-454029 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-454029 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m13.364231105s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-454029 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-454029 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (24.291264314s)
helpers_test.go:175: Cleaning up "cert-expiration-454029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-454029
--- PASS: TestCertExpiration (278.64s)
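The two start invocations above exercise the --cert-expiration flag: the first provisions certificates valid for only three minutes, the second re-runs start with an 8760h (one-year) window so minikube can renew the soon-to-expire certificates. A minimal reproduction, with a hypothetical profile name and "minikube" in place of the CI binary:

	minikube start -p cert-exp-demo --memory=2048 --cert-expiration=3m \
	  --driver=kvm2 --container-runtime=crio
	# once the 3m certificates are near expiry, renew them with a longer window
	minikube start -p cert-exp-demo --memory=2048 --cert-expiration=8760h \
	  --driver=kvm2 --container-runtime=crio
	minikube delete -p cert-exp-demo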

                                                
                                    
x
+
TestForceSystemdFlag (84.16s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-905296 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0422 12:01:57.324906   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-905296 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m22.858694301s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-905296 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-905296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-905296
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-905296: (1.082075136s)
--- PASS: TestForceSystemdFlag (84.16s)

                                                
                                    
x
+
TestForceSystemdEnv (98.47s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-262232 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-262232 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m37.478175383s)
helpers_test.go:175: Cleaning up "force-systemd-env-262232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-262232
--- PASS: TestForceSystemdEnv (98.47s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.65s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.65s)

                                                
                                    
TestErrorSpam/setup (48.18s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-210381 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-210381 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-210381 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-210381 --driver=kvm2  --container-runtime=crio: (48.175961757s)
--- PASS: TestErrorSpam/setup (48.18s)

                                                
                                    
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 status
--- PASS: TestErrorSpam/status (0.78s)

                                                
                                    
TestErrorSpam/pause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 pause
--- PASS: TestErrorSpam/pause (1.70s)

                                                
                                    
TestErrorSpam/unpause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
TestErrorSpam/stop (4.69s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 stop: (2.291126137s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 stop: (1.232623757s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-210381 --log_dir /tmp/nospam-210381 stop: (1.167446456s)
--- PASS: TestErrorSpam/stop (4.69s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18711-7633/.minikube/files/etc/test/nested/copy/14945/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (98.76s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668059 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0422 10:51:57.324912   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 10:51:57.330653   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 10:51:57.340986   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 10:51:57.361177   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 10:51:57.401488   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 10:51:57.481806   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 10:51:57.642208   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 10:51:57.962777   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 10:51:58.603775   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 10:51:59.884240   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 10:52:02.445138   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 10:52:07.565836   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 10:52:17.806108   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 10:52:38.286645   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 10:53:19.247762   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-668059 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m38.761743069s)
--- PASS: TestFunctional/serial/StartWithProxy (98.76s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.97s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668059 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-668059 --alsologtostderr -v=8: (37.967424159s)
functional_test.go:659: soft start took 37.968009061s for "functional-668059" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.97s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-668059 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-668059 cache add registry.k8s.io/pause:3.3: (1.180740849s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-668059 cache add registry.k8s.io/pause:latest: (1.041450224s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-668059 /tmp/TestFunctionalserialCacheCmdcacheadd_local2654508188/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 cache add minikube-local-cache-test:functional-668059
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-668059 cache add minikube-local-cache-test:functional-668059: (2.060804689s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 cache delete minikube-local-cache-test:functional-668059
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-668059
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.42s)
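Condensed, the local-image caching flow exercised above looks like this (a sketch; the docker build context is the harness temp directory, shortened to "." here):

    # Build a throwaway image on the host and add it to minikube's image cache.
    docker build -t minikube-local-cache-test:functional-668059 .
    out/minikube-linux-amd64 -p functional-668059 cache add \
      minikube-local-cache-test:functional-668059
    # Remove it from the cache and from the host afterwards.
    out/minikube-linux-amd64 -p functional-668059 cache delete \
      minikube-local-cache-test:functional-668059
    docker rmi minikube-local-cache-test:functional-668059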

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668059 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (225.134712ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
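The reload flow above, condensed (all commands taken from the log):

    # Remove a cached image from inside the node...
    out/minikube-linux-amd64 -p functional-668059 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # ...confirm it is gone (inspecti exits non-zero with "no such image")...
    out/minikube-linux-amd64 -p functional-668059 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # ...then push everything in the local cache back onto the node and re-check.
    out/minikube-linux-amd64 -p functional-668059 cache reload
    out/minikube-linux-amd64 -p functional-668059 ssh sudo crictl inspecti registry.k8s.io/pause:latest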

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 kubectl -- --context functional-668059 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-668059 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (409.48s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668059 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0422 10:54:41.170505   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 10:56:57.327180   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 10:57:25.010771   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-668059 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (6m49.484327136s)
functional_test.go:757: restart took 6m49.484489863s for "functional-668059" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (409.48s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-668059 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-668059 logs: (1.250211549s)
--- PASS: TestFunctional/serial/LogsCmd (1.25s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 logs --file /tmp/TestFunctionalserialLogsFileCmd1965272524/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-668059 logs --file /tmp/TestFunctionalserialLogsFileCmd1965272524/001/logs.txt: (1.35342174s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                    
TestFunctional/serial/InvalidService (4.26s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-668059 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-668059
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-668059: exit status 115 (281.329159ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.220:31991 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-668059 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.26s)
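Condensed repro of the failure mode exercised above (commands from the log):

    # Create a Service that has no running backing pods...
    kubectl --context functional-668059 apply -f testdata/invalidsvc.yaml
    # ...and ask minikube for its URL: exits 115 with SVC_UNREACHABLE
    # ("no running pod for service invalid-svc found").
    out/minikube-linux-amd64 service invalid-svc -p functional-668059
    kubectl --context functional-668059 delete -f testdata/invalidsvc.yaml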

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668059 config get cpus: exit status 14 (70.526216ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668059 config get cpus: exit status 14 (58.846147ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
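The config round-trip above, condensed (a sketch; the value printed by the get that follows the set is assumed to be the 2 just written, which the log does not echo):

    out/minikube-linux-amd64 -p functional-668059 config set cpus 2
    out/minikube-linux-amd64 -p functional-668059 config get cpus    # should print 2
    out/minikube-linux-amd64 -p functional-668059 config unset cpus
    out/minikube-linux-amd64 -p functional-668059 config get cpus    # exit 14: key not found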

                                                
                                    
TestFunctional/parallel/DryRun (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668059 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-668059 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (155.559991ms)

                                                
                                                
-- stdout --
	* [functional-668059] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 11:01:29.874521   24968 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:01:29.874639   24968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:01:29.874647   24968 out.go:304] Setting ErrFile to fd 2...
	I0422 11:01:29.874652   24968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:01:29.874837   24968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:01:29.875349   24968 out.go:298] Setting JSON to false
	I0422 11:01:29.876240   24968 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2633,"bootTime":1713781057,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 11:01:29.876299   24968 start.go:139] virtualization: kvm guest
	I0422 11:01:29.879277   24968 out.go:177] * [functional-668059] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 11:01:29.881032   24968 notify.go:220] Checking for updates...
	I0422 11:01:29.881051   24968 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 11:01:29.882819   24968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 11:01:29.884382   24968 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 11:01:29.885971   24968 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:01:29.887475   24968 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 11:01:29.888951   24968 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 11:01:29.890902   24968 config.go:182] Loaded profile config "functional-668059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:01:29.891369   24968 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:01:29.891433   24968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:01:29.906455   24968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39003
	I0422 11:01:29.906991   24968 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:01:29.907508   24968 main.go:141] libmachine: Using API Version  1
	I0422 11:01:29.907534   24968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:01:29.907962   24968 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:01:29.908171   24968 main.go:141] libmachine: (functional-668059) Calling .DriverName
	I0422 11:01:29.908461   24968 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 11:01:29.908764   24968 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:01:29.908824   24968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:01:29.923680   24968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I0422 11:01:29.924082   24968 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:01:29.924573   24968 main.go:141] libmachine: Using API Version  1
	I0422 11:01:29.924596   24968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:01:29.924916   24968 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:01:29.925120   24968 main.go:141] libmachine: (functional-668059) Calling .DriverName
	I0422 11:01:29.961248   24968 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 11:01:29.962927   24968 start.go:297] selected driver: kvm2
	I0422 11:01:29.962939   24968 start.go:901] validating driver "kvm2" against &{Name:functional-668059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterNa
me:functional-668059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:01:29.963034   24968 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 11:01:29.965409   24968 out.go:177] 
	W0422 11:01:29.966961   24968 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0422 11:01:29.968467   24968 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668059 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.34s)
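Condensed, the dry-run validation above (commands from the log; nothing is created or changed in either case):

    # Dry-run against the existing profile succeeds.
    out/minikube-linux-amd64 start -p functional-668059 --dry-run --driver=kvm2 --container-runtime=crio
    # Requesting less memory than the usable minimum fails fast even in dry-run mode
    # (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY).
    out/minikube-linux-amd64 start -p functional-668059 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio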

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668059 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-668059 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (168.622861ms)

                                                
                                                
-- stdout --
	* [functional-668059] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 11:01:30.214826   25085 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:01:30.215038   25085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:01:30.215073   25085 out.go:304] Setting ErrFile to fd 2...
	I0422 11:01:30.215089   25085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:01:30.215679   25085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:01:30.216786   25085 out.go:298] Setting JSON to false
	I0422 11:01:30.217967   25085 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2633,"bootTime":1713781057,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 11:01:30.218050   25085 start.go:139] virtualization: kvm guest
	I0422 11:01:30.224814   25085 out.go:177] * [functional-668059] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	I0422 11:01:30.226517   25085 notify.go:220] Checking for updates...
	I0422 11:01:30.226525   25085 out.go:177]   - MINIKUBE_LOCATION=18711
	I0422 11:01:30.228100   25085 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 11:01:30.229632   25085 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	I0422 11:01:30.231087   25085 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	I0422 11:01:30.232573   25085 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 11:01:30.234040   25085 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 11:01:30.236171   25085 config.go:182] Loaded profile config "functional-668059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:01:30.236764   25085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:01:30.236862   25085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:01:30.256167   25085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35671
	I0422 11:01:30.256596   25085 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:01:30.257335   25085 main.go:141] libmachine: Using API Version  1
	I0422 11:01:30.257357   25085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:01:30.257757   25085 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:01:30.257967   25085 main.go:141] libmachine: (functional-668059) Calling .DriverName
	I0422 11:01:30.258232   25085 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 11:01:30.258657   25085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:01:30.258708   25085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:01:30.273793   25085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37273
	I0422 11:01:30.274346   25085 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:01:30.275014   25085 main.go:141] libmachine: Using API Version  1
	I0422 11:01:30.275039   25085 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:01:30.275404   25085 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:01:30.275627   25085 main.go:141] libmachine: (functional-668059) Calling .DriverName
	I0422 11:01:30.310511   25085 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0422 11:01:30.312187   25085 start.go:297] selected driver: kvm2
	I0422 11:01:30.312200   25085 start.go:901] validating driver "kvm2" against &{Name:functional-668059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713569670-18702@sha256:1db7a12e122807eaef46d49daa14095f818f9bfb653fcf62060f6eb507c1f0d8 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterNa
me:functional-668059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 11:01:30.312304   25085 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 11:01:30.314662   25085 out.go:177] 
	W0422 11:01:30.316351   25085 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0422 11:01:30.317781   25085 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-668059 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-668059 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-fwc4t" [8e20b825-ece8-407b-8cf5-d0ee1251f79d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-fwc4t" [8e20b825-ece8-407b-8cf5-d0ee1251f79d] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004706117s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.220:31285
functional_test.go:1671: http://192.168.39.220:31285: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-fwc4t

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.220:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.220:31285
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.66s)
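Condensed, the NodePort round-trip above (commands from the log; the HTTP GET is done in Go by the test, so curl stands in for it here):

    kubectl --context functional-668059 create deployment hello-node-connect \
      --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-668059 expose deployment hello-node-connect \
      --type=NodePort --port=8080
    # Resolve the NodePort URL (e.g. http://192.168.39.220:31285) and hit it.
    URL=$(out/minikube-linux-amd64 -p functional-668059 service hello-node-connect --url)
    curl -s "$URL"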

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (44.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [bcaa2046-0f49-4a0b-a444-81c8e4daf200] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004790105s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-668059 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-668059 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-668059 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-668059 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-668059 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bca3b53a-5e22-4189-9807-d06d682abbac] Pending
helpers_test.go:344: "sp-pod" [bca3b53a-5e22-4189-9807-d06d682abbac] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bca3b53a-5e22-4189-9807-d06d682abbac] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.008553488s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-668059 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-668059 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-668059 delete -f testdata/storage-provisioner/pod.yaml: (3.673475994s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-668059 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b28a4a6f-1613-4ede-9ef7-9ce0d306ae0b] Pending
helpers_test.go:344: "sp-pod" [b28a4a6f-1613-4ede-9ef7-9ce0d306ae0b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b28a4a6f-1613-4ede-9ef7-9ce0d306ae0b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004485072s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-668059 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.34s)
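The persistence check above, condensed (commands from the log):

    kubectl --context functional-668059 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-668059 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-668059 exec sp-pod -- touch /tmp/mount/foo
    # Recreate the pod; the file must survive because it lives on the claim, not in the pod.
    kubectl --context functional-668059 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-668059 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-668059 exec sp-pod -- ls /tmp/mount    # expect: foo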

                                                
                                    
TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh -n functional-668059 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 cp functional-668059:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4246004516/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh -n functional-668059 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh -n functional-668059 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.60s)

                                                
                                    
TestFunctional/parallel/MySQL (27.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-668059 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-78smp" [0a14d820-ef5d-443b-9c56-4bd9150647c1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-78smp" [0a14d820-ef5d-443b-9c56-4bd9150647c1] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.005927056s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-668059 exec mysql-64454c8b5c-78smp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-668059 exec mysql-64454c8b5c-78smp -- mysql -ppassword -e "show databases;": exit status 1 (206.687999ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-668059 exec mysql-64454c8b5c-78smp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-668059 exec mysql-64454c8b5c-78smp -- mysql -ppassword -e "show databases;": exit status 1 (153.538585ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-668059 exec mysql-64454c8b5c-78smp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.85s)

                                                
                                    
TestFunctional/parallel/FileSync (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/14945/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "sudo cat /etc/test/nested/copy/14945/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)
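What this verifies: a file placed under the minikube home's files/ tree on the host is copied into the node at the corresponding absolute path when the cluster starts. A short sketch (paths taken from the log):

    # Host side: <MINIKUBE_HOME>/.minikube/files/etc/test/nested/copy/14945/hosts
    # shows up inside the node as /etc/test/nested/copy/14945/hosts:
    out/minikube-linux-amd64 -p functional-668059 ssh "sudo cat /etc/test/nested/copy/14945/hosts"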

                                                
                                    
TestFunctional/parallel/CertSync (1.75s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/14945.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "sudo cat /etc/ssl/certs/14945.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/14945.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "sudo cat /usr/share/ca-certificates/14945.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/149452.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "sudo cat /etc/ssl/certs/149452.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/149452.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "sudo cat /usr/share/ca-certificates/149452.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.75s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-668059 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
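
The go-template above prints only the label keys of the first node, which the test can then check for the minikube-applied labels. A small sketch that runs the same template and splits the keys (illustrative only; context name as in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same go-template as the test: print only the label keys of the first node.
        tmpl := "{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}"
        out, err := exec.Command("kubectl", "--context", "functional-668059",
            "get", "nodes", "--output=go-template", "--template="+tmpl).Output()
        if err != nil {
            panic(err)
        }
        for _, key := range strings.Fields(string(out)) {
            fmt.Println(key) // e.g. kubernetes.io/hostname, minikube.k8s.io/...
        }
    }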

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668059 ssh "sudo systemctl is-active docker": exit status 1 (298.82105ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668059 ssh "sudo systemctl is-active containerd": exit status 1 (300.016712ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
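
The exit status 3 above is systemctl's code for an inactive unit, so a non-zero exit plus the word "inactive" is exactly what this test wants when the cluster runs cri-o. A sketch of the same probe (hypothetical helper; binary path and profile name as in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runtimeActive asks systemd inside the VM whether a runtime unit is active.
    // systemctl prints the state and exits non-zero for anything but "active",
    // which is why the log shows "inactive" together with exit status 3.
    func runtimeActive(profile, unit string) (bool, string) {
        out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile,
            "ssh", "sudo systemctl is-active "+unit).Output()
        state := strings.TrimSpace(string(out))
        return state == "active", state
    }

    func main() {
        for _, unit := range []string{"docker", "containerd", "crio"} {
            active, state := runtimeActive("functional-668059", unit)
            fmt.Printf("%-10s %s (active=%v)\n", unit, state, active)
        }
    }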

                                                
                                    
TestFunctional/parallel/License (0.48s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-668059 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-668059 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-2fmdd" [3e141bfa-a355-4da3-aaa2-59333c8f91ed] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-2fmdd" [3e141bfa-a355-4da3-aaa2-59333c8f91ed] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004353123s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)
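
The subtest drives three kubectl steps: create the deployment, expose it as a NodePort service, and wait for the pod to come up. A compressed sketch of the same flow (illustrative; `kubectl wait` stands in for the per-pod polling the test does itself):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run shells out to kubectl against the test's context and echoes the output.
    func run(args ...string) error {
        cmd := exec.Command("kubectl", append([]string{"--context", "functional-668059"}, args...)...)
        out, err := cmd.CombinedOutput()
        fmt.Printf("$ kubectl %v\n%s", args, out)
        return err
    }

    func main() {
        _ = run("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8")
        _ = run("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
        _ = run("wait", "--for=condition=available", "deployment/hello-node", "--timeout=10m")
    }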

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "388.807787ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "61.861308ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "320.986683ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "58.253933ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.68s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-668059 /tmp/TestFunctionalparallelMountCmdany-port2807216047/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713783680309811525" to /tmp/TestFunctionalparallelMountCmdany-port2807216047/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713783680309811525" to /tmp/TestFunctionalparallelMountCmdany-port2807216047/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713783680309811525" to /tmp/TestFunctionalparallelMountCmdany-port2807216047/001/test-1713783680309811525
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668059 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (300.349415ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 22 11:01 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 22 11:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 22 11:01 test-1713783680309811525
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh cat /mount-9p/test-1713783680309811525
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-668059 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7e74d2ed-5b70-476e-82ca-8cfc199a0fe2] Pending
helpers_test.go:344: "busybox-mount" [7e74d2ed-5b70-476e-82ca-8cfc199a0fe2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7e74d2ed-5b70-476e-82ca-8cfc199a0fe2] Running
helpers_test.go:344: "busybox-mount" [7e74d2ed-5b70-476e-82ca-8cfc199a0fe2] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7e74d2ed-5b70-476e-82ca-8cfc199a0fe2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004867103s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-668059 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668059 /tmp/TestFunctionalparallelMountCmdany-port2807216047/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.68s)
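
The single failed findmnt at the start is the test polling before the background `minikube mount` has finished setting up the 9p mount; it retries until the mount appears and then exercises it from the busybox-mount pod. A sketch of that poll (hypothetical helper; binary path, profile and mount point are the ones in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForMount polls findmnt inside the VM until the 9p mount shows up.
    // The first failed findmnt in the log is this poll racing the mount daemon.
    func waitForMount(profile, mountPoint string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            probe := exec.Command("out/minikube-linux-amd64", "-p", profile,
                "ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
            if probe.Run() == nil {
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("%s never appeared as a 9p mount", mountPoint)
    }

    func main() {
        // Assumes `minikube mount <hostdir>:/mount-9p` is already running in the background.
        if err := waitForMount("functional-668059", "/mount-9p", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }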

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 service list -o json
functional_test.go:1490: Took "325.023918ms" to run "out/minikube-linux-amd64 -p functional-668059 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.220:32222
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.49s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.9s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-668059 /tmp/TestFunctionalparallelMountCmdspecific-port781275773/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668059 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (328.219746ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668059 /tmp/TestFunctionalparallelMountCmdspecific-port781275773/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668059 ssh "sudo umount -f /mount-9p": exit status 1 (286.261706ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-668059 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668059 /tmp/TestFunctionalparallelMountCmdspecific-port781275773/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.90s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.41s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.220:32222
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)
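
The URL printed by `minikube service --url` is just the VM's IP plus the service's NodePort (32222 for hello-node here). A sketch that rebuilds the same URL from kubectl and `minikube ip` (hypothetical helper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // nodePortURL rebuilds the URL that `minikube service --url` printed:
    // the VM's IP plus the service's first NodePort.
    func nodePortURL(kubeContext, profile, svc string) (string, error) {
        port, err := exec.Command("kubectl", "--context", kubeContext, "get", "svc", svc,
            "-o", "jsonpath={.spec.ports[0].nodePort}").Output()
        if err != nil {
            return "", err
        }
        ip, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ip").Output()
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("http://%s:%s",
            strings.TrimSpace(string(ip)), strings.TrimSpace(string(port))), nil
    }

    func main() {
        url, err := nodePortURL("functional-668059", "functional-668059", "hello-node")
        if err != nil {
            panic(err)
        }
        fmt.Println(url) // e.g. http://192.168.39.220:32222
    }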

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.62s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-668059 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-668059
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-668059
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-668059 image ls --format short --alsologtostderr:
I0422 11:02:02.355096   26462 out.go:291] Setting OutFile to fd 1 ...
I0422 11:02:02.356367   26462 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 11:02:02.356380   26462 out.go:304] Setting ErrFile to fd 2...
I0422 11:02:02.356387   26462 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 11:02:02.356830   26462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
I0422 11:02:02.357463   26462 config.go:182] Loaded profile config "functional-668059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 11:02:02.357580   26462 config.go:182] Loaded profile config "functional-668059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 11:02:02.357976   26462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 11:02:02.358015   26462 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 11:02:02.372994   26462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44047
I0422 11:02:02.373438   26462 main.go:141] libmachine: () Calling .GetVersion
I0422 11:02:02.374004   26462 main.go:141] libmachine: Using API Version  1
I0422 11:02:02.374032   26462 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 11:02:02.374348   26462 main.go:141] libmachine: () Calling .GetMachineName
I0422 11:02:02.374659   26462 main.go:141] libmachine: (functional-668059) Calling .GetState
I0422 11:02:02.378362   26462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 11:02:02.378411   26462 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 11:02:02.393260   26462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35229
I0422 11:02:02.393586   26462 main.go:141] libmachine: () Calling .GetVersion
I0422 11:02:02.394259   26462 main.go:141] libmachine: Using API Version  1
I0422 11:02:02.394278   26462 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 11:02:02.394640   26462 main.go:141] libmachine: () Calling .GetMachineName
I0422 11:02:02.394837   26462 main.go:141] libmachine: (functional-668059) Calling .DriverName
I0422 11:02:02.395007   26462 ssh_runner.go:195] Run: systemctl --version
I0422 11:02:02.395032   26462 main.go:141] libmachine: (functional-668059) Calling .GetSSHHostname
I0422 11:02:02.397885   26462 main.go:141] libmachine: (functional-668059) DBG | domain functional-668059 has defined MAC address 52:54:00:0f:9a:cb in network mk-functional-668059
I0422 11:02:02.398258   26462 main.go:141] libmachine: (functional-668059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:cb", ip: ""} in network mk-functional-668059: {Iface:virbr1 ExpiryTime:2024-04-22 11:52:12 +0000 UTC Type:0 Mac:52:54:00:0f:9a:cb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:functional-668059 Clientid:01:52:54:00:0f:9a:cb}
I0422 11:02:02.398276   26462 main.go:141] libmachine: (functional-668059) DBG | domain functional-668059 has defined IP address 192.168.39.220 and MAC address 52:54:00:0f:9a:cb in network mk-functional-668059
I0422 11:02:02.398534   26462 main.go:141] libmachine: (functional-668059) Calling .GetSSHPort
I0422 11:02:02.398674   26462 main.go:141] libmachine: (functional-668059) Calling .GetSSHKeyPath
I0422 11:02:02.398810   26462 main.go:141] libmachine: (functional-668059) Calling .GetSSHUsername
I0422 11:02:02.398922   26462 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/functional-668059/id_rsa Username:docker}
I0422 11:02:02.490872   26462 ssh_runner.go:195] Run: sudo crictl images --output json
I0422 11:02:02.586551   26462 main.go:141] libmachine: Making call to close driver server
I0422 11:02:02.586575   26462 main.go:141] libmachine: (functional-668059) Calling .Close
I0422 11:02:02.586825   26462 main.go:141] libmachine: Successfully made call to close driver server
I0422 11:02:02.586841   26462 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 11:02:02.586853   26462 main.go:141] libmachine: Making call to close driver server
I0422 11:02:02.586861   26462 main.go:141] libmachine: (functional-668059) Calling .Close
I0422 11:02:02.587075   26462 main.go:141] libmachine: Successfully made call to close driver server
I0422 11:02:02.587106   26462 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)
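
As the stderr above shows, `minikube image ls` is ultimately backed by `sudo crictl images --output json` inside the VM; the short, table, json and yaml formats below are different renderings of that one listing. A sketch that reads the same JSON directly (the field names are assumed from crictl's protobuf-JSON output, so treat them as an assumption):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // crictl images --output json returns an object with an "images" array;
    // each entry carries the id, repoTags and repoDigests that the image ls
    // formats above are rendered from.
    type imageList struct {
        Images []struct {
            ID       string   `json:"id"`
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-668059",
            "ssh", "sudo crictl images --output json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                fmt.Printf("%-60s %s\n", tag, img.ID)
            }
        }
    }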

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-668059 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-proxy              | v1.30.0            | a0bf559e280cf | 85.9MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-scheduler          | v1.30.0            | 259c8277fcbbc | 63MB   |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/google-containers/addon-resizer  | functional-668059  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/nginx                 | latest             | 2ac752d7aeb1d | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| localhost/minikube-local-cache-test     | functional-668059  | 3bd89cf1f623f | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.30.0            | c42f13656d0b2 | 118MB  |
| registry.k8s.io/kube-controller-manager | v1.30.0            | c7aad43836fa5 | 112MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-668059 image ls --format table --alsologtostderr:
I0422 11:02:02.627799   26529 out.go:291] Setting OutFile to fd 1 ...
I0422 11:02:02.627946   26529 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 11:02:02.627957   26529 out.go:304] Setting ErrFile to fd 2...
I0422 11:02:02.627961   26529 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 11:02:02.628154   26529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
I0422 11:02:02.628697   26529 config.go:182] Loaded profile config "functional-668059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 11:02:02.628819   26529 config.go:182] Loaded profile config "functional-668059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 11:02:02.629178   26529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 11:02:02.629214   26529 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 11:02:02.643784   26529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33257
I0422 11:02:02.644283   26529 main.go:141] libmachine: () Calling .GetVersion
I0422 11:02:02.644826   26529 main.go:141] libmachine: Using API Version  1
I0422 11:02:02.644849   26529 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 11:02:02.645279   26529 main.go:141] libmachine: () Calling .GetMachineName
I0422 11:02:02.645562   26529 main.go:141] libmachine: (functional-668059) Calling .GetState
I0422 11:02:02.647688   26529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 11:02:02.647721   26529 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 11:02:02.672339   26529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42069
I0422 11:02:02.672758   26529 main.go:141] libmachine: () Calling .GetVersion
I0422 11:02:02.673206   26529 main.go:141] libmachine: Using API Version  1
I0422 11:02:02.673232   26529 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 11:02:02.673583   26529 main.go:141] libmachine: () Calling .GetMachineName
I0422 11:02:02.673748   26529 main.go:141] libmachine: (functional-668059) Calling .DriverName
I0422 11:02:02.673939   26529 ssh_runner.go:195] Run: systemctl --version
I0422 11:02:02.673967   26529 main.go:141] libmachine: (functional-668059) Calling .GetSSHHostname
I0422 11:02:02.676661   26529 main.go:141] libmachine: (functional-668059) DBG | domain functional-668059 has defined MAC address 52:54:00:0f:9a:cb in network mk-functional-668059
I0422 11:02:02.677112   26529 main.go:141] libmachine: (functional-668059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:cb", ip: ""} in network mk-functional-668059: {Iface:virbr1 ExpiryTime:2024-04-22 11:52:12 +0000 UTC Type:0 Mac:52:54:00:0f:9a:cb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:functional-668059 Clientid:01:52:54:00:0f:9a:cb}
I0422 11:02:02.677144   26529 main.go:141] libmachine: (functional-668059) DBG | domain functional-668059 has defined IP address 192.168.39.220 and MAC address 52:54:00:0f:9a:cb in network mk-functional-668059
I0422 11:02:02.677275   26529 main.go:141] libmachine: (functional-668059) Calling .GetSSHPort
I0422 11:02:02.677488   26529 main.go:141] libmachine: (functional-668059) Calling .GetSSHKeyPath
I0422 11:02:02.677905   26529 main.go:141] libmachine: (functional-668059) Calling .GetSSHUsername
I0422 11:02:02.678424   26529 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/functional-668059/id_rsa Username:docker}
I0422 11:02:02.760164   26529 ssh_runner.go:195] Run: sudo crictl images --output json
I0422 11:02:02.812429   26529 main.go:141] libmachine: Making call to close driver server
I0422 11:02:02.812452   26529 main.go:141] libmachine: (functional-668059) Calling .Close
I0422 11:02:02.812802   26529 main.go:141] libmachine: Successfully made call to close driver server
I0422 11:02:02.812817   26529 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 11:02:02.812830   26529 main.go:141] libmachine: Making call to close driver server
I0422 11:02:02.812838   26529 main.go:141] libmachine: (functional-668059) Calling .Close
I0422 11:02:02.814729   26529 main.go:141] libmachine: (functional-668059) DBG | Closing plugin on server side
I0422 11:02:02.814767   26529 main.go:141] libmachine: Successfully made call to close driver server
I0422 11:02:02.814800   26529 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-668059 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c
936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117609952"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/ku
bernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"3bd89cf1f623f62b2581f758d26cf63e6cb668f91865aa43facdd1d1eb327577","repoDigests":["localhost/minikube-local-cache-test@sha256:5ddf3df3423ee24e521b0319c90310ae74707b4a0066f33fdee87789e9c9845d"],"repoTags":["localhost/minikube-local-cache-test:functional-668059"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/ki
ndnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580","repoDigests":["docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419","docker.io/library/nginx@sha256:b5873c5e785c0ae70b4f999d6719a27441126667088c2edd1eaf3060e4868ec5"],"repoTags":["docker.io/library/nginx:latest"],"size":"191703878"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-668059"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@s
ha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe","registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"112170310"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":["registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"85932953"},{"id":"da86e6ba
6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67","registry
.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"63026502"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-668059 image ls --format json --alsologtostderr:
I0422 11:02:02.346090   26460 out.go:291] Setting OutFile to fd 1 ...
I0422 11:02:02.346200   26460 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 11:02:02.346221   26460 out.go:304] Setting ErrFile to fd 2...
I0422 11:02:02.346225   26460 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 11:02:02.346431   26460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
I0422 11:02:02.350195   26460 config.go:182] Loaded profile config "functional-668059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 11:02:02.350348   26460 config.go:182] Loaded profile config "functional-668059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 11:02:02.350697   26460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 11:02:02.350741   26460 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 11:02:02.366087   26460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42197
I0422 11:02:02.366485   26460 main.go:141] libmachine: () Calling .GetVersion
I0422 11:02:02.367106   26460 main.go:141] libmachine: Using API Version  1
I0422 11:02:02.367132   26460 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 11:02:02.367515   26460 main.go:141] libmachine: () Calling .GetMachineName
I0422 11:02:02.367702   26460 main.go:141] libmachine: (functional-668059) Calling .GetState
I0422 11:02:02.369721   26460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 11:02:02.369755   26460 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 11:02:02.386669   26460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36289
I0422 11:02:02.387072   26460 main.go:141] libmachine: () Calling .GetVersion
I0422 11:02:02.387581   26460 main.go:141] libmachine: Using API Version  1
I0422 11:02:02.387616   26460 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 11:02:02.387947   26460 main.go:141] libmachine: () Calling .GetMachineName
I0422 11:02:02.388111   26460 main.go:141] libmachine: (functional-668059) Calling .DriverName
I0422 11:02:02.388298   26460 ssh_runner.go:195] Run: systemctl --version
I0422 11:02:02.388322   26460 main.go:141] libmachine: (functional-668059) Calling .GetSSHHostname
I0422 11:02:02.391147   26460 main.go:141] libmachine: (functional-668059) DBG | domain functional-668059 has defined MAC address 52:54:00:0f:9a:cb in network mk-functional-668059
I0422 11:02:02.391574   26460 main.go:141] libmachine: (functional-668059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:cb", ip: ""} in network mk-functional-668059: {Iface:virbr1 ExpiryTime:2024-04-22 11:52:12 +0000 UTC Type:0 Mac:52:54:00:0f:9a:cb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:functional-668059 Clientid:01:52:54:00:0f:9a:cb}
I0422 11:02:02.391597   26460 main.go:141] libmachine: (functional-668059) DBG | domain functional-668059 has defined IP address 192.168.39.220 and MAC address 52:54:00:0f:9a:cb in network mk-functional-668059
I0422 11:02:02.391733   26460 main.go:141] libmachine: (functional-668059) Calling .GetSSHPort
I0422 11:02:02.391874   26460 main.go:141] libmachine: (functional-668059) Calling .GetSSHKeyPath
I0422 11:02:02.392026   26460 main.go:141] libmachine: (functional-668059) Calling .GetSSHUsername
I0422 11:02:02.392163   26460 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/functional-668059/id_rsa Username:docker}
I0422 11:02:02.472880   26460 ssh_runner.go:195] Run: sudo crictl images --output json
I0422 11:02:02.554197   26460 main.go:141] libmachine: Making call to close driver server
I0422 11:02:02.554208   26460 main.go:141] libmachine: (functional-668059) Calling .Close
I0422 11:02:02.554608   26460 main.go:141] libmachine: (functional-668059) DBG | Closing plugin on server side
I0422 11:02:02.554618   26460 main.go:141] libmachine: Successfully made call to close driver server
I0422 11:02:02.554640   26460 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 11:02:02.554653   26460 main.go:141] libmachine: Making call to close driver server
I0422 11:02:02.554664   26460 main.go:141] libmachine: (functional-668059) Calling .Close
I0422 11:02:02.554912   26460 main.go:141] libmachine: (functional-668059) DBG | Closing plugin on server side
I0422 11:02:02.554951   26460 main.go:141] libmachine: Successfully made call to close driver server
I0422 11:02:02.554961   26460 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-668059 image ls --format yaml --alsologtostderr:
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-668059
size: "34114467"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests:
- registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "85932953"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
- registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "63026502"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 3bd89cf1f623f62b2581f758d26cf63e6cb668f91865aa43facdd1d1eb327577
repoDigests:
- localhost/minikube-local-cache-test@sha256:5ddf3df3423ee24e521b0319c90310ae74707b4a0066f33fdee87789e9c9845d
repoTags:
- localhost/minikube-local-cache-test:functional-668059
size: "3330"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
- registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "112170310"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117609952"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580
repoDigests:
- docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419
- docker.io/library/nginx@sha256:b5873c5e785c0ae70b4f999d6719a27441126667088c2edd1eaf3060e4868ec5
repoTags:
- docker.io/library/nginx:latest
size: "191703878"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-668059 image ls --format yaml --alsologtostderr:
I0422 11:02:02.352150   26461 out.go:291] Setting OutFile to fd 1 ...
I0422 11:02:02.352287   26461 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 11:02:02.352299   26461 out.go:304] Setting ErrFile to fd 2...
I0422 11:02:02.352305   26461 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 11:02:02.352570   26461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
I0422 11:02:02.353373   26461 config.go:182] Loaded profile config "functional-668059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 11:02:02.353539   26461 config.go:182] Loaded profile config "functional-668059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 11:02:02.354104   26461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 11:02:02.354159   26461 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 11:02:02.371806   26461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36389
I0422 11:02:02.372363   26461 main.go:141] libmachine: () Calling .GetVersion
I0422 11:02:02.372968   26461 main.go:141] libmachine: Using API Version  1
I0422 11:02:02.372992   26461 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 11:02:02.373406   26461 main.go:141] libmachine: () Calling .GetMachineName
I0422 11:02:02.373600   26461 main.go:141] libmachine: (functional-668059) Calling .GetState
I0422 11:02:02.375652   26461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 11:02:02.375690   26461 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 11:02:02.390485   26461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35761
I0422 11:02:02.391066   26461 main.go:141] libmachine: () Calling .GetVersion
I0422 11:02:02.391553   26461 main.go:141] libmachine: Using API Version  1
I0422 11:02:02.391570   26461 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 11:02:02.392113   26461 main.go:141] libmachine: () Calling .GetMachineName
I0422 11:02:02.392297   26461 main.go:141] libmachine: (functional-668059) Calling .DriverName
I0422 11:02:02.392427   26461 ssh_runner.go:195] Run: systemctl --version
I0422 11:02:02.392442   26461 main.go:141] libmachine: (functional-668059) Calling .GetSSHHostname
I0422 11:02:02.394900   26461 main.go:141] libmachine: (functional-668059) DBG | domain functional-668059 has defined MAC address 52:54:00:0f:9a:cb in network mk-functional-668059
I0422 11:02:02.395410   26461 main.go:141] libmachine: (functional-668059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:cb", ip: ""} in network mk-functional-668059: {Iface:virbr1 ExpiryTime:2024-04-22 11:52:12 +0000 UTC Type:0 Mac:52:54:00:0f:9a:cb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:functional-668059 Clientid:01:52:54:00:0f:9a:cb}
I0422 11:02:02.395477   26461 main.go:141] libmachine: (functional-668059) DBG | domain functional-668059 has defined IP address 192.168.39.220 and MAC address 52:54:00:0f:9a:cb in network mk-functional-668059
I0422 11:02:02.395818   26461 main.go:141] libmachine: (functional-668059) Calling .GetSSHPort
I0422 11:02:02.395988   26461 main.go:141] libmachine: (functional-668059) Calling .GetSSHKeyPath
I0422 11:02:02.396124   26461 main.go:141] libmachine: (functional-668059) Calling .GetSSHUsername
I0422 11:02:02.396235   26461 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/functional-668059/id_rsa Username:docker}
I0422 11:02:02.491077   26461 ssh_runner.go:195] Run: sudo crictl images --output json
I0422 11:02:02.553103   26461 main.go:141] libmachine: Making call to close driver server
I0422 11:02:02.553116   26461 main.go:141] libmachine: (functional-668059) Calling .Close
I0422 11:02:02.553396   26461 main.go:141] libmachine: Successfully made call to close driver server
I0422 11:02:02.553416   26461 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 11:02:02.553433   26461 main.go:141] libmachine: Making call to close driver server
I0422 11:02:02.553441   26461 main.go:141] libmachine: (functional-668059) Calling .Close
I0422 11:02:02.553734   26461 main.go:141] libmachine: (functional-668059) DBG | Closing plugin on server side
I0422 11:02:02.553754   26461 main.go:141] libmachine: Successfully made call to close driver server
I0422 11:02:02.553782   26461 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668059 ssh pgrep buildkitd: exit status 1 (211.885829ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image build -t localhost/my-image:functional-668059 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-668059 image build -t localhost/my-image:functional-668059 testdata/build --alsologtostderr: (3.176840807s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-668059 image build -t localhost/my-image:functional-668059 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3ac22764a54
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-668059
--> 19a30d7e0a2
Successfully tagged localhost/my-image:functional-668059
19a30d7e0a243655c9520c18a352ef8ef04bf446c705a3370c6f7e3333e556d8
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-668059 image build -t localhost/my-image:functional-668059 testdata/build --alsologtostderr:
I0422 11:02:02.829993   26581 out.go:291] Setting OutFile to fd 1 ...
I0422 11:02:02.830143   26581 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 11:02:02.830152   26581 out.go:304] Setting ErrFile to fd 2...
I0422 11:02:02.830156   26581 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 11:02:02.830382   26581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
I0422 11:02:02.830976   26581 config.go:182] Loaded profile config "functional-668059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 11:02:02.831505   26581 config.go:182] Loaded profile config "functional-668059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 11:02:02.831913   26581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 11:02:02.831957   26581 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 11:02:02.847031   26581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46565
I0422 11:02:02.847498   26581 main.go:141] libmachine: () Calling .GetVersion
I0422 11:02:02.848007   26581 main.go:141] libmachine: Using API Version  1
I0422 11:02:02.848028   26581 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 11:02:02.848379   26581 main.go:141] libmachine: () Calling .GetMachineName
I0422 11:02:02.848582   26581 main.go:141] libmachine: (functional-668059) Calling .GetState
I0422 11:02:02.850525   26581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 11:02:02.850572   26581 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 11:02:02.865123   26581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
I0422 11:02:02.865532   26581 main.go:141] libmachine: () Calling .GetVersion
I0422 11:02:02.866006   26581 main.go:141] libmachine: Using API Version  1
I0422 11:02:02.866028   26581 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 11:02:02.866376   26581 main.go:141] libmachine: () Calling .GetMachineName
I0422 11:02:02.866557   26581 main.go:141] libmachine: (functional-668059) Calling .DriverName
I0422 11:02:02.866764   26581 ssh_runner.go:195] Run: systemctl --version
I0422 11:02:02.866786   26581 main.go:141] libmachine: (functional-668059) Calling .GetSSHHostname
I0422 11:02:02.869321   26581 main.go:141] libmachine: (functional-668059) DBG | domain functional-668059 has defined MAC address 52:54:00:0f:9a:cb in network mk-functional-668059
I0422 11:02:02.869733   26581 main.go:141] libmachine: (functional-668059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:9a:cb", ip: ""} in network mk-functional-668059: {Iface:virbr1 ExpiryTime:2024-04-22 11:52:12 +0000 UTC Type:0 Mac:52:54:00:0f:9a:cb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:functional-668059 Clientid:01:52:54:00:0f:9a:cb}
I0422 11:02:02.869760   26581 main.go:141] libmachine: (functional-668059) DBG | domain functional-668059 has defined IP address 192.168.39.220 and MAC address 52:54:00:0f:9a:cb in network mk-functional-668059
I0422 11:02:02.869924   26581 main.go:141] libmachine: (functional-668059) Calling .GetSSHPort
I0422 11:02:02.870095   26581 main.go:141] libmachine: (functional-668059) Calling .GetSSHKeyPath
I0422 11:02:02.870223   26581 main.go:141] libmachine: (functional-668059) Calling .GetSSHUsername
I0422 11:02:02.870370   26581 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/functional-668059/id_rsa Username:docker}
I0422 11:02:02.953698   26581 build_images.go:161] Building image from path: /tmp/build.3276657353.tar
I0422 11:02:02.953768   26581 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0422 11:02:02.969450   26581 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3276657353.tar
I0422 11:02:02.974916   26581 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3276657353.tar: stat -c "%s %y" /var/lib/minikube/build/build.3276657353.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3276657353.tar': No such file or directory
I0422 11:02:02.974954   26581 ssh_runner.go:362] scp /tmp/build.3276657353.tar --> /var/lib/minikube/build/build.3276657353.tar (3072 bytes)
I0422 11:02:03.012236   26581 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3276657353
I0422 11:02:03.023779   26581 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3276657353 -xf /var/lib/minikube/build/build.3276657353.tar
I0422 11:02:03.039332   26581 crio.go:315] Building image: /var/lib/minikube/build/build.3276657353
I0422 11:02:03.039401   26581 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-668059 /var/lib/minikube/build/build.3276657353 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0422 11:02:05.917765   26581 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-668059 /var/lib/minikube/build/build.3276657353 --cgroup-manager=cgroupfs: (2.878318806s)
I0422 11:02:05.917848   26581 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3276657353
I0422 11:02:05.931613   26581 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3276657353.tar
I0422 11:02:05.944165   26581 build_images.go:217] Built localhost/my-image:functional-668059 from /tmp/build.3276657353.tar
I0422 11:02:05.944202   26581 build_images.go:133] succeeded building to: functional-668059
I0422 11:02:05.944207   26581 build_images.go:134] failed building to: 
I0422 11:02:05.944265   26581 main.go:141] libmachine: Making call to close driver server
I0422 11:02:05.944284   26581 main.go:141] libmachine: (functional-668059) Calling .Close
I0422 11:02:05.944563   26581 main.go:141] libmachine: Successfully made call to close driver server
I0422 11:02:05.944593   26581 main.go:141] libmachine: (functional-668059) DBG | Closing plugin on server side
I0422 11:02:05.944601   26581 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 11:02:05.944614   26581 main.go:141] libmachine: Making call to close driver server
I0422 11:02:05.944622   26581 main.go:141] libmachine: (functional-668059) Calling .Close
I0422 11:02:05.944837   26581 main.go:141] libmachine: Successfully made call to close driver server
I0422 11:02:05.944853   26581 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 11:02:05.944884   26581 main.go:141] libmachine: (functional-668059) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.63s)
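
The build flow exercised above can be reproduced by hand against the same profile; a minimal sketch, assuming the functional-668059 profile from this run is still up and testdata/build holds the same build context (minikube stages the context under /var/lib/minikube/build and runs sudo podman build over SSH, as in the log above):

    # Build an image inside the minikube VM with the CRI-O/podman backend
    out/minikube-linux-amd64 -p functional-668059 image build \
      -t localhost/my-image:functional-668059 testdata/build --alsologtostderr

    # Confirm the image landed in the runtime's image store
    out/minikube-linux-amd64 -p functional-668059 image ls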

TestFunctional/parallel/ImageCommands/Setup (2.14s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.114694134s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-668059
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.14s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.53s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-668059 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2439363402/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-668059 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2439363402/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-668059 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2439363402/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668059 ssh "findmnt -T" /mount1: exit status 1 (385.201946ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-668059 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668059 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2439363402/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668059 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2439363402/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668059 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2439363402/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.53s)
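
The cleanup check above boils down to three concurrent mounts of one host directory being torn down by a single --kill; a minimal sketch, assuming the same profile and a scratch host directory /tmp/scratch (hypothetical path):

    # Start three mounts of the same host dir; each command runs as a daemon
    out/minikube-linux-amd64 mount -p functional-668059 /tmp/scratch:/mount1 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-668059 /tmp/scratch:/mount2 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-668059 /tmp/scratch:/mount3 --alsologtostderr -v=1 &

    # Verify each mount is visible inside the guest
    out/minikube-linux-amd64 -p functional-668059 ssh "findmnt -T /mount1"

    # Kill all mount processes for the profile in one shot
    out/minikube-linux-amd64 mount -p functional-668059 --kill=true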

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image load --daemon gcr.io/google-containers/addon-resizer:functional-668059 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-668059 image load --daemon gcr.io/google-containers/addon-resizer:functional-668059 --alsologtostderr: (4.682344645s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.07s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (11.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image load --daemon gcr.io/google-containers/addon-resizer:functional-668059 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-668059 image load --daemon gcr.io/google-containers/addon-resizer:functional-668059 --alsologtostderr: (11.147180706s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (11.44s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.852626104s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-668059
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image load --daemon gcr.io/google-containers/addon-resizer:functional-668059 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-668059 image load --daemon gcr.io/google-containers/addon-resizer:functional-668059 --alsologtostderr: (4.838620565s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.94s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image save gcr.io/google-containers/addon-resizer:functional-668059 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
E0422 11:01:57.324942   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-668059 image save gcr.io/google-containers/addon-resizer:functional-668059 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.326160286s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.33s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image rm gcr.io/google-containers/addon-resizer:functional-668059 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-668059 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.358112742s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.61s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-668059
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-668059 image save --daemon gcr.io/google-containers/addon-resizer:functional-668059 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-668059 image save --daemon gcr.io/google-containers/addon-resizer:functional-668059 --alsologtostderr: (1.490206005s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-668059
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.53s)
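
The image tests above chain into a save/remove/reload round trip; a minimal sketch using the same commands, with the tarball path shortened to ./addon-resizer-save.tar for readability (hypothetical path, the run used a workspace path):

    # Save the cached image to a tarball on the host
    out/minikube-linux-amd64 -p functional-668059 image save \
      gcr.io/google-containers/addon-resizer:functional-668059 ./addon-resizer-save.tar

    # Remove it from the cluster's runtime, then restore it from the tarball
    out/minikube-linux-amd64 -p functional-668059 image rm gcr.io/google-containers/addon-resizer:functional-668059
    out/minikube-linux-amd64 -p functional-668059 image load ./addon-resizer-save.tar

    # Push the image back into the local docker daemon and confirm it is there
    out/minikube-linux-amd64 -p functional-668059 image save --daemon gcr.io/google-containers/addon-resizer:functional-668059
    docker image inspect gcr.io/google-containers/addon-resizer:functional-668059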

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-668059
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-668059
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-668059
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (211.02s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-821265 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0422 11:06:57.324943   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 11:08:20.372573   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-821265 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m30.33156038s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (211.02s)
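
The HA start exercised here can be reproduced with the same flags; a minimal sketch, assuming the kvm2 driver and CRI-O runtime used throughout this job:

    # Bring up a multi-control-plane (HA) cluster and wait for all components
    out/minikube-linux-amd64 start -p ha-821265 --wait=true --memory=2200 --ha \
      -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio

    # Check that every node, control plane and worker, reports healthy
    out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr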

TestMultiControlPlane/serial/DeployApp (6.9s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-821265 -- rollout status deployment/busybox: (4.460630105s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- exec busybox-fc5497c4f-b4r5w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- exec busybox-fc5497c4f-ft78k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- exec busybox-fc5497c4f-fzcrw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- exec busybox-fc5497c4f-b4r5w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- exec busybox-fc5497c4f-ft78k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- exec busybox-fc5497c4f-fzcrw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- exec busybox-fc5497c4f-b4r5w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- exec busybox-fc5497c4f-ft78k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- exec busybox-fc5497c4f-fzcrw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.90s)
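
The DNS checks above are plain kubectl execs against the busybox deployment; a minimal sketch (pod names such as busybox-fc5497c4f-b4r5w are specific to this run):

    # Roll out the test deployment and wait for it to become ready
    out/minikube-linux-amd64 kubectl -p ha-821265 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p ha-821265 -- rollout status deployment/busybox

    # Resolve an external name and an in-cluster name from one of the pods
    out/minikube-linux-amd64 kubectl -p ha-821265 -- exec busybox-fc5497c4f-b4r5w -- nslookup kubernetes.io
    out/minikube-linux-amd64 kubectl -p ha-821265 -- exec busybox-fc5497c4f-b4r5w -- nslookup kubernetes.default.svc.cluster.local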

TestMultiControlPlane/serial/PingHostFromPods (1.4s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- exec busybox-fc5497c4f-b4r5w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- exec busybox-fc5497c4f-b4r5w -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- exec busybox-fc5497c4f-ft78k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- exec busybox-fc5497c4f-ft78k -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- exec busybox-fc5497c4f-fzcrw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-821265 -- exec busybox-fc5497c4f-fzcrw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.40s)

TestMultiControlPlane/serial/AddWorkerNode (48.66s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-821265 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-821265 -v=7 --alsologtostderr: (47.74231909s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.66s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-821265 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

TestMultiControlPlane/serial/CopyFile (13.52s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp testdata/cp-test.txt ha-821265:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp ha-821265:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1102049705/001/cp-test_ha-821265.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp ha-821265:/home/docker/cp-test.txt ha-821265-m02:/home/docker/cp-test_ha-821265_ha-821265-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m02 "sudo cat /home/docker/cp-test_ha-821265_ha-821265-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp ha-821265:/home/docker/cp-test.txt ha-821265-m03:/home/docker/cp-test_ha-821265_ha-821265-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m03 "sudo cat /home/docker/cp-test_ha-821265_ha-821265-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp ha-821265:/home/docker/cp-test.txt ha-821265-m04:/home/docker/cp-test_ha-821265_ha-821265-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m04 "sudo cat /home/docker/cp-test_ha-821265_ha-821265-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp testdata/cp-test.txt ha-821265-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp ha-821265-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1102049705/001/cp-test_ha-821265-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp ha-821265-m02:/home/docker/cp-test.txt ha-821265:/home/docker/cp-test_ha-821265-m02_ha-821265.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265 "sudo cat /home/docker/cp-test_ha-821265-m02_ha-821265.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp ha-821265-m02:/home/docker/cp-test.txt ha-821265-m03:/home/docker/cp-test_ha-821265-m02_ha-821265-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m03 "sudo cat /home/docker/cp-test_ha-821265-m02_ha-821265-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp ha-821265-m02:/home/docker/cp-test.txt ha-821265-m04:/home/docker/cp-test_ha-821265-m02_ha-821265-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m04 "sudo cat /home/docker/cp-test_ha-821265-m02_ha-821265-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp testdata/cp-test.txt ha-821265-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp ha-821265-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1102049705/001/cp-test_ha-821265-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp ha-821265-m03:/home/docker/cp-test.txt ha-821265:/home/docker/cp-test_ha-821265-m03_ha-821265.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265 "sudo cat /home/docker/cp-test_ha-821265-m03_ha-821265.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp ha-821265-m03:/home/docker/cp-test.txt ha-821265-m02:/home/docker/cp-test_ha-821265-m03_ha-821265-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m02 "sudo cat /home/docker/cp-test_ha-821265-m03_ha-821265-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp ha-821265-m03:/home/docker/cp-test.txt ha-821265-m04:/home/docker/cp-test_ha-821265-m03_ha-821265-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m04 "sudo cat /home/docker/cp-test_ha-821265-m03_ha-821265-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp testdata/cp-test.txt ha-821265-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1102049705/001/cp-test_ha-821265-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt ha-821265:/home/docker/cp-test_ha-821265-m04_ha-821265.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265 "sudo cat /home/docker/cp-test_ha-821265-m04_ha-821265.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt ha-821265-m02:/home/docker/cp-test_ha-821265-m04_ha-821265-m02.txt
E0422 11:11:17.644588   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
E0422 11:11:17.649910   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
E0422 11:11:17.660147   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
E0422 11:11:17.680405   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
E0422 11:11:17.720691   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
E0422 11:11:17.800999   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m04 "sudo cat /home/docker/cp-test.txt"
E0422 11:11:17.961541   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m02 "sudo cat /home/docker/cp-test_ha-821265-m04_ha-821265-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 cp ha-821265-m04:/home/docker/cp-test.txt ha-821265-m03:/home/docker/cp-test_ha-821265-m04_ha-821265-m03.txt
E0422 11:11:18.282305   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m03 "sudo cat /home/docker/cp-test_ha-821265-m04_ha-821265-m03.txt"
E0422 11:11:18.923070   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/CopyFile (13.52s)
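
The copy matrix above is built from two primitives, host-to-node and node-to-node copies verified over SSH; a minimal sketch of one hop (target paths follow the test's naming):

    # Copy a file from the host into the primary node
    out/minikube-linux-amd64 -p ha-821265 cp testdata/cp-test.txt ha-821265:/home/docker/cp-test.txt

    # Copy the same file directly between two nodes of the cluster
    out/minikube-linux-amd64 -p ha-821265 cp ha-821265:/home/docker/cp-test.txt ha-821265-m02:/home/docker/cp-test_ha-821265_ha-821265-m02.txt

    # Verify the contents on the destination node over SSH
    out/minikube-linux-amd64 -p ha-821265 ssh -n ha-821265-m02 "sudo cat /home/docker/cp-test_ha-821265_ha-821265-m02.txt"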

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.47905225s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

TestMultiControlPlane/serial/DeleteSecondaryNode (17.7s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-821265 node delete m03 -v=7 --alsologtostderr: (16.934004641s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.70s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

TestMultiControlPlane/serial/RestartCluster (500.81s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-821265 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0422 11:25:00.374325   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 11:26:17.644182   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
E0422 11:26:57.324946   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
E0422 11:27:40.689616   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
E0422 11:31:17.643588   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-821265 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (8m20.050336671s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (500.81s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

TestMultiControlPlane/serial/AddSecondaryNode (76.94s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-821265 --control-plane -v=7 --alsologtostderr
E0422 11:31:57.324876   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-821265 --control-plane -v=7 --alsologtostderr: (1m16.053785595s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-821265 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.94s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

TestJSONOutput/start/Command (61.87s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-778865 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-778865 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.868218579s)
--- PASS: TestJSONOutput/start/Command (61.87s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.78s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-778865 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.72s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-778865 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.72s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.41s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-778865 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-778865 --output=json --user=testUser: (7.409971383s)
--- PASS: TestJSONOutput/stop/Command (7.41s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-734490 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-734490 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.615638ms)

-- stdout --
	{"specversion":"1.0","id":"0c73d923-5e11-4820-bb22-b21ba6daacc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-734490] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6fa561db-d0ca-438e-8af0-9940a0adc69c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18711"}}
	{"specversion":"1.0","id":"18e11190-8561-4131-9fa8-edb459243e7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0ee2cb8b-1d80-4347-b0a3-b2e5aef1c3a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig"}}
	{"specversion":"1.0","id":"5fc9b99a-1ba2-4f21-88fc-9de6c6144973","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube"}}
	{"specversion":"1.0","id":"908ee2d4-da70-4e6f-8d7e-f7181d7a9f49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7f7a1441-aee9-4872-8a22-132239363f78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"360f30a4-3d37-412b-9568-169cba9ba4ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-734490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-734490
--- PASS: TestErrorJSONOutput (0.21s)
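
Each line in the stdout above is a CloudEvents-style JSON object whose visible fields are specversion, id, source, type, datacontenttype, and a string-valued data payload. A minimal Go sketch for decoding such lines, assuming only the fields shown in this output (the struct and variable names are illustrative, not part of the test suite):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors only the fields visible in the stdout above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Read one JSON event per line from stdin, e.g.
	//   out/minikube-linux-amd64 start -p demo --output=json | go run decode.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}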

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (96.02s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-336046 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-336046 --driver=kvm2  --container-runtime=crio: (46.631241003s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-338730 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-338730 --driver=kvm2  --container-runtime=crio: (46.542743703s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-336046
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-338730
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-338730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-338730
helpers_test.go:175: Cleaning up "first-336046" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-336046
--- PASS: TestMinikubeProfile (96.02s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (30.98s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-928849 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0422 11:36:17.644325   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-928849 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.974849333s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.98s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-928849 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-928849 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
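
The verification above shells into the profile, lists the mounted host directory, and checks that a 9p filesystem is present. A minimal sketch of the same check driven from Go, assuming the binary path and profile name used in this run; the control flow here is illustrative, not the test's implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	bin := "out/minikube-linux-amd64" // binary used throughout this run
	profile := "mount-start-1-928849" // profile from the test above

	// List the mounted host directory inside the guest.
	if out, err := exec.Command(bin, "-p", profile, "ssh", "--", "ls", "/minikube-host").CombinedOutput(); err != nil {
		fmt.Printf("ls /minikube-host failed: %v\n%s", err, out)
		return
	}

	// Confirm a 9p filesystem is mounted, mirroring `mount | grep 9p`.
	out, err := exec.Command(bin, "-p", profile, "ssh", "--", "mount").CombinedOutput()
	if err != nil {
		fmt.Printf("mount failed: %v\n%s", err, out)
		return
	}
	if strings.Contains(string(out), "9p") {
		fmt.Println("9p mount present")
	} else {
		fmt.Println("no 9p mount found")
	}
}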

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.22s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-941513 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0422 11:36:57.327558   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-941513 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.218269698s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.22s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-941513 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-941513 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-928849 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-941513 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-941513 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.41s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-941513
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-941513: (1.409533823s)
--- PASS: TestMountStart/serial/Stop (1.41s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.21s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-941513
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-941513: (22.205141046s)
--- PASS: TestMountStart/serial/RestartStopped (23.21s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-941513 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-941513 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (136.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-254635 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-254635 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m15.788514386s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (136.21s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-254635 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-254635 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-254635 -- rollout status deployment/busybox: (3.806030554s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-254635 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-254635 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-254635 -- exec busybox-fc5497c4f-c9hnn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-254635 -- exec busybox-fc5497c4f-w6wst -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-254635 -- exec busybox-fc5497c4f-c9hnn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-254635 -- exec busybox-fc5497c4f-w6wst -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-254635 -- exec busybox-fc5497c4f-c9hnn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-254635 -- exec busybox-fc5497c4f-w6wst -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.46s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-254635 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-254635 -- exec busybox-fc5497c4f-c9hnn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-254635 -- exec busybox-fc5497c4f-c9hnn -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-254635 -- exec busybox-fc5497c4f-w6wst -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-254635 -- exec busybox-fc5497c4f-w6wst -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                    
TestMultiNode/serial/AddNode (45.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-254635 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-254635 -v 3 --alsologtostderr: (45.067956388s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.66s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-254635 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 cp testdata/cp-test.txt multinode-254635:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 cp multinode-254635:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile714579271/001/cp-test_multinode-254635.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 cp multinode-254635:/home/docker/cp-test.txt multinode-254635-m02:/home/docker/cp-test_multinode-254635_multinode-254635-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635-m02 "sudo cat /home/docker/cp-test_multinode-254635_multinode-254635-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 cp multinode-254635:/home/docker/cp-test.txt multinode-254635-m03:/home/docker/cp-test_multinode-254635_multinode-254635-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635-m03 "sudo cat /home/docker/cp-test_multinode-254635_multinode-254635-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 cp testdata/cp-test.txt multinode-254635-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 cp multinode-254635-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile714579271/001/cp-test_multinode-254635-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 cp multinode-254635-m02:/home/docker/cp-test.txt multinode-254635:/home/docker/cp-test_multinode-254635-m02_multinode-254635.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635 "sudo cat /home/docker/cp-test_multinode-254635-m02_multinode-254635.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 cp multinode-254635-m02:/home/docker/cp-test.txt multinode-254635-m03:/home/docker/cp-test_multinode-254635-m02_multinode-254635-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635-m03 "sudo cat /home/docker/cp-test_multinode-254635-m02_multinode-254635-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 cp testdata/cp-test.txt multinode-254635-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 cp multinode-254635-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile714579271/001/cp-test_multinode-254635-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 cp multinode-254635-m03:/home/docker/cp-test.txt multinode-254635:/home/docker/cp-test_multinode-254635-m03_multinode-254635.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635 "sudo cat /home/docker/cp-test_multinode-254635-m03_multinode-254635.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 cp multinode-254635-m03:/home/docker/cp-test.txt multinode-254635-m02:/home/docker/cp-test_multinode-254635-m03_multinode-254635-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 ssh -n multinode-254635-m02 "sudo cat /home/docker/cp-test_multinode-254635-m03_multinode-254635-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.49s)
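
Each `cp` above is followed by an `ssh ... sudo cat` of the destination, which is how the helper confirms the file landed intact on every node. A minimal sketch of that copy-then-verify round trip, assuming the binary, profile, node, and paths shown in this run; the comparison logic is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	bin := "out/minikube-linux-amd64"
	profile := "multinode-254635"
	node := "multinode-254635-m02"

	src := "testdata/cp-test.txt"
	dst := node + ":/home/docker/cp-test.txt"

	// Copy the file onto the node.
	if out, err := exec.Command(bin, "-p", profile, "cp", src, dst).CombinedOutput(); err != nil {
		fmt.Printf("cp failed: %v\n%s", err, out)
		return
	}

	// Read it back over ssh and compare with the local source file.
	got, err := exec.Command(bin, "-p", profile, "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		fmt.Printf("ssh cat failed: %v\n", err)
		return
	}
	want, _ := os.ReadFile(src)
	fmt.Println("contents match:", string(got) == string(want))
}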

                                                
                                    
TestMultiNode/serial/StopNode (2.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-254635 node stop m03: (1.628054166s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-254635 status: exit status 7 (435.324234ms)

                                                
                                                
-- stdout --
	multinode-254635
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-254635-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-254635-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-254635 status --alsologtostderr: exit status 7 (430.270897ms)

                                                
                                                
-- stdout --
	multinode-254635
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-254635-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-254635-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 11:40:46.473812   45700 out.go:291] Setting OutFile to fd 1 ...
	I0422 11:40:46.473953   45700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:40:46.473965   45700 out.go:304] Setting ErrFile to fd 2...
	I0422 11:40:46.473972   45700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 11:40:46.474180   45700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18711-7633/.minikube/bin
	I0422 11:40:46.474353   45700 out.go:298] Setting JSON to false
	I0422 11:40:46.474379   45700 mustload.go:65] Loading cluster: multinode-254635
	I0422 11:40:46.474436   45700 notify.go:220] Checking for updates...
	I0422 11:40:46.474728   45700 config.go:182] Loaded profile config "multinode-254635": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 11:40:46.474744   45700 status.go:255] checking status of multinode-254635 ...
	I0422 11:40:46.475109   45700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:40:46.475161   45700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:40:46.492424   45700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33313
	I0422 11:40:46.492806   45700 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:40:46.493400   45700 main.go:141] libmachine: Using API Version  1
	I0422 11:40:46.493420   45700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:40:46.493773   45700 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:40:46.493962   45700 main.go:141] libmachine: (multinode-254635) Calling .GetState
	I0422 11:40:46.495585   45700 status.go:330] multinode-254635 host status = "Running" (err=<nil>)
	I0422 11:40:46.495609   45700 host.go:66] Checking if "multinode-254635" exists ...
	I0422 11:40:46.495874   45700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:40:46.495911   45700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:40:46.511446   45700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46059
	I0422 11:40:46.511796   45700 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:40:46.512366   45700 main.go:141] libmachine: Using API Version  1
	I0422 11:40:46.512384   45700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:40:46.512652   45700 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:40:46.512891   45700 main.go:141] libmachine: (multinode-254635) Calling .GetIP
	I0422 11:40:46.515657   45700 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:40:46.516128   45700 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:40:46.516157   45700 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:40:46.516271   45700 host.go:66] Checking if "multinode-254635" exists ...
	I0422 11:40:46.516631   45700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:40:46.516689   45700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:40:46.531484   45700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36075
	I0422 11:40:46.531871   45700 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:40:46.532310   45700 main.go:141] libmachine: Using API Version  1
	I0422 11:40:46.532330   45700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:40:46.532613   45700 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:40:46.532802   45700 main.go:141] libmachine: (multinode-254635) Calling .DriverName
	I0422 11:40:46.532973   45700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:40:46.533005   45700 main.go:141] libmachine: (multinode-254635) Calling .GetSSHHostname
	I0422 11:40:46.535545   45700 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:40:46.535931   45700 main.go:141] libmachine: (multinode-254635) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:1f:f6", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:37:44 +0000 UTC Type:0 Mac:52:54:00:e2:1f:f6 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-254635 Clientid:01:52:54:00:e2:1f:f6}
	I0422 11:40:46.535955   45700 main.go:141] libmachine: (multinode-254635) DBG | domain multinode-254635 has defined IP address 192.168.39.185 and MAC address 52:54:00:e2:1f:f6 in network mk-multinode-254635
	I0422 11:40:46.536109   45700 main.go:141] libmachine: (multinode-254635) Calling .GetSSHPort
	I0422 11:40:46.536233   45700 main.go:141] libmachine: (multinode-254635) Calling .GetSSHKeyPath
	I0422 11:40:46.536335   45700 main.go:141] libmachine: (multinode-254635) Calling .GetSSHUsername
	I0422 11:40:46.536479   45700 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/multinode-254635/id_rsa Username:docker}
	I0422 11:40:46.617528   45700 ssh_runner.go:195] Run: systemctl --version
	I0422 11:40:46.624251   45700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:40:46.640974   45700 kubeconfig.go:125] found "multinode-254635" server: "https://192.168.39.185:8443"
	I0422 11:40:46.641067   45700 api_server.go:166] Checking apiserver status ...
	I0422 11:40:46.641141   45700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 11:40:46.658612   45700 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1156/cgroup
	W0422 11:40:46.673031   45700 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1156/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 11:40:46.673073   45700 ssh_runner.go:195] Run: ls
	I0422 11:40:46.677694   45700 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0422 11:40:46.681863   45700 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0422 11:40:46.681884   45700 status.go:422] multinode-254635 apiserver status = Running (err=<nil>)
	I0422 11:40:46.681895   45700 status.go:257] multinode-254635 status: &{Name:multinode-254635 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:40:46.681911   45700 status.go:255] checking status of multinode-254635-m02 ...
	I0422 11:40:46.682216   45700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:40:46.682252   45700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:40:46.696896   45700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40795
	I0422 11:40:46.697267   45700 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:40:46.697678   45700 main.go:141] libmachine: Using API Version  1
	I0422 11:40:46.697700   45700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:40:46.697965   45700 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:40:46.698088   45700 main.go:141] libmachine: (multinode-254635-m02) Calling .GetState
	I0422 11:40:46.699509   45700 status.go:330] multinode-254635-m02 host status = "Running" (err=<nil>)
	I0422 11:40:46.699526   45700 host.go:66] Checking if "multinode-254635-m02" exists ...
	I0422 11:40:46.699771   45700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:40:46.699804   45700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:40:46.713864   45700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42353
	I0422 11:40:46.714244   45700 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:40:46.714750   45700 main.go:141] libmachine: Using API Version  1
	I0422 11:40:46.714777   45700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:40:46.715141   45700 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:40:46.715450   45700 main.go:141] libmachine: (multinode-254635-m02) Calling .GetIP
	I0422 11:40:46.718433   45700 main.go:141] libmachine: (multinode-254635-m02) DBG | domain multinode-254635-m02 has defined MAC address 52:54:00:3e:24:96 in network mk-multinode-254635
	I0422 11:40:46.718910   45700 main.go:141] libmachine: (multinode-254635-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:24:96", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:39:18 +0000 UTC Type:0 Mac:52:54:00:3e:24:96 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-254635-m02 Clientid:01:52:54:00:3e:24:96}
	I0422 11:40:46.718932   45700 main.go:141] libmachine: (multinode-254635-m02) DBG | domain multinode-254635-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:3e:24:96 in network mk-multinode-254635
	I0422 11:40:46.719096   45700 host.go:66] Checking if "multinode-254635-m02" exists ...
	I0422 11:40:46.719380   45700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:40:46.719417   45700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:40:46.733929   45700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37909
	I0422 11:40:46.734382   45700 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:40:46.734832   45700 main.go:141] libmachine: Using API Version  1
	I0422 11:40:46.734851   45700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:40:46.735114   45700 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:40:46.735290   45700 main.go:141] libmachine: (multinode-254635-m02) Calling .DriverName
	I0422 11:40:46.735435   45700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 11:40:46.735461   45700 main.go:141] libmachine: (multinode-254635-m02) Calling .GetSSHHostname
	I0422 11:40:46.738103   45700 main.go:141] libmachine: (multinode-254635-m02) DBG | domain multinode-254635-m02 has defined MAC address 52:54:00:3e:24:96 in network mk-multinode-254635
	I0422 11:40:46.738521   45700 main.go:141] libmachine: (multinode-254635-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:24:96", ip: ""} in network mk-multinode-254635: {Iface:virbr1 ExpiryTime:2024-04-22 12:39:18 +0000 UTC Type:0 Mac:52:54:00:3e:24:96 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-254635-m02 Clientid:01:52:54:00:3e:24:96}
	I0422 11:40:46.738551   45700 main.go:141] libmachine: (multinode-254635-m02) DBG | domain multinode-254635-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:3e:24:96 in network mk-multinode-254635
	I0422 11:40:46.738687   45700 main.go:141] libmachine: (multinode-254635-m02) Calling .GetSSHPort
	I0422 11:40:46.738838   45700 main.go:141] libmachine: (multinode-254635-m02) Calling .GetSSHKeyPath
	I0422 11:40:46.738992   45700 main.go:141] libmachine: (multinode-254635-m02) Calling .GetSSHUsername
	I0422 11:40:46.739092   45700 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18711-7633/.minikube/machines/multinode-254635-m02/id_rsa Username:docker}
	I0422 11:40:46.816458   45700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 11:40:46.832163   45700 status.go:257] multinode-254635-m02 status: &{Name:multinode-254635-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0422 11:40:46.832192   45700 status.go:255] checking status of multinode-254635-m03 ...
	I0422 11:40:46.832531   45700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 11:40:46.832572   45700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 11:40:46.847390   45700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40433
	I0422 11:40:46.847816   45700 main.go:141] libmachine: () Calling .GetVersion
	I0422 11:40:46.848248   45700 main.go:141] libmachine: Using API Version  1
	I0422 11:40:46.848274   45700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 11:40:46.848601   45700 main.go:141] libmachine: () Calling .GetMachineName
	I0422 11:40:46.848836   45700 main.go:141] libmachine: (multinode-254635-m03) Calling .GetState
	I0422 11:40:46.850268   45700 status.go:330] multinode-254635-m03 host status = "Stopped" (err=<nil>)
	I0422 11:40:46.850281   45700 status.go:343] host is not running, skipping remaining checks
	I0422 11:40:46.850288   45700 status.go:257] multinode-254635-m03 status: &{Name:multinode-254635-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.49s)
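
Note that `status` exits non-zero (exit status 7 here) once any node is stopped, so callers have to treat that exit code as informational rather than fatal. A minimal sketch of reading both the output and the exit code, assuming the binary and profile from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	bin := "out/minikube-linux-amd64"
	profile := "multinode-254635"

	out, err := exec.Command(bin, "-p", profile, "status").CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 7 simply means at least one node is not running.
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run status:", err)
	}
}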

                                                
                                    
TestMultiNode/serial/StartAfterStop (32.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 node start m03 -v=7 --alsologtostderr
E0422 11:41:17.644304   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-254635 node start m03 -v=7 --alsologtostderr: (31.500342275s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.14s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-254635 node delete m03: (1.676377672s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.22s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (173.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-254635 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0422 11:51:17.643764   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-254635 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m53.159722468s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-254635 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (173.70s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-254635
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-254635-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-254635-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (73.876838ms)

                                                
                                                
-- stdout --
	* [multinode-254635-m02] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-254635-m02' is duplicated with machine name 'multinode-254635-m02' in profile 'multinode-254635'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-254635-m03 --driver=kvm2  --container-runtime=crio
E0422 11:51:57.324888   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-254635-m03 --driver=kvm2  --container-runtime=crio: (46.566635665s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-254635
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-254635: exit status 80 (236.851109ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-254635 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-254635-m03 already exists in multinode-254635-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-254635-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.93s)

                                                
                                    
TestScheduledStopUnix (116.52s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-260164 --memory=2048 --driver=kvm2  --container-runtime=crio
E0422 11:58:20.376947   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/addons-649657/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-260164 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.779571201s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-260164 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-260164 -n scheduled-stop-260164
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-260164 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-260164 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-260164 -n scheduled-stop-260164
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-260164
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-260164 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-260164
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-260164: exit status 7 (81.746166ms)

                                                
                                                
-- stdout --
	scheduled-stop-260164
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-260164 -n scheduled-stop-260164
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-260164 -n scheduled-stop-260164: exit status 7 (75.788991ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-260164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-260164
--- PASS: TestScheduledStopUnix (116.52s)
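
The sequence above schedules a stop, inspects the pending timer via the TimeToStop status field, cancels it, and finally lets a short schedule fire (after which `status` reports Stopped with exit status 7). A minimal sketch of that schedule-then-cancel flow, assuming the binary and profile names from this run; the run helper is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s(err=%v)\n", args, out, err)
}

func main() {
	profile := "scheduled-stop-260164"

	// Schedule a stop five minutes out, inspect the pending timer, then cancel it.
	run("stop", "-p", profile, "--schedule", "5m")
	run("status", "--format={{.TimeToStop}}", "-p", profile)
	run("stop", "-p", profile, "--cancel-scheduled")

	// A 15s schedule is left to fire; afterwards status exits 7 and reports Stopped.
	run("stop", "-p", profile, "--schedule", "15s")
}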

                                                
                                    
TestRunningBinaryUpgrade (193.4s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2908492001 start -p running-upgrade-307156 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2908492001 start -p running-upgrade-307156 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m40.854448922s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-307156 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-307156 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m28.305780595s)
helpers_test.go:175: Cleaning up "running-upgrade-307156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-307156
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-307156: (1.269034242s)
--- PASS: TestRunningBinaryUpgrade (193.40s)

                                                
                                    
TestPause/serial/Start (127.2s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-253908 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-253908 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m7.201551834s)
--- PASS: TestPause/serial/Start (127.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-483459 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-483459 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (98.416473ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-483459] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18711-7633/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18711-7633/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (122.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-483459 --driver=kvm2  --container-runtime=crio
E0422 12:01:00.692430   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
E0422 12:01:17.644243   14945 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18711-7633/.minikube/profiles/functional-668059/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-483459 --driver=kvm2  --container-runtime=crio: (2m2.634516545s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-483459 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (122.89s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (48.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-483459 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-483459 --no-kubernetes --driver=kvm2  --container-runtime=crio: (47.03316214s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-483459 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-483459 status -o json: exit status 2 (253.936844ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-483459","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-483459
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (48.11s)
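
The `status -o json` output above is a single object whose visible fields (Name, Host, Kubelet, APIServer, Kubeconfig, Worker) make the --no-kubernetes state easy to check programmatically: the host is Running while Kubelet and APIServer are Stopped. A minimal decoding sketch, assuming only the fields visible in that stdout (the struct name is illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeStatus mirrors only the fields visible in the status output above.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// status exits non-zero (exit status 2 here) when components are stopped,
	// but the JSON is still written to stdout, so decode it regardless.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "NoKubernetes-483459", "status", "-o", "json").Output()

	var st nodeStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("could not decode status JSON:", err)
		return
	}
	noK8s := st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped"
	fmt.Printf("%s running without Kubernetes: %v\n", st.Name, noK8s)
}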

                                                
                                    
TestNoKubernetes/serial/Start (34.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-483459 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-483459 --no-kubernetes --driver=kvm2  --container-runtime=crio: (34.707520849s)
--- PASS: TestNoKubernetes/serial/Start (34.71s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-483459 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-483459 "sudo systemctl is-active --quiet service kubelet": exit status 1 (241.991596ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.128649844s)
--- PASS: TestNoKubernetes/serial/ProfileList (1.55s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-483459
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-483459: (2.324968966s)
--- PASS: TestNoKubernetes/serial/Stop (2.33s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (70.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-483459 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-483459 --driver=kvm2  --container-runtime=crio: (1m10.105182984s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (70.11s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-483459 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-483459 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.390835ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.59s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (101.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3797744260 start -p stopped-upgrade-178757 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3797744260 start -p stopped-upgrade-178757 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (54.354233555s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3797744260 -p stopped-upgrade-178757 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3797744260 -p stopped-upgrade-178757 stop: (2.114402286s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-178757 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-178757 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.474165382s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (101.94s)
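
The upgrade path exercised here is: create the cluster with an older release binary (v1.26.0, pre-downloaded to the /tmp path shown), stop it, then start the same profile with the binary under test. A minimal sketch of that sequence, with the binary paths and profile name taken from the log above (the run helper is illustrative, not the version_upgrade_test.go code):

	// Illustrative sketch: old-binary start, stop, then start again with the
	// binary under test, mirroring the command sequence in the log above.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(bin string, args ...string) {
		if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v failed: %v\n%s", bin, args, err, out)
		}
	}

	func main() {
		oldBin := "/tmp/minikube-v1.26.0.3797744260" // released binary path from the log
		newBin := "out/minikube-linux-amd64"         // binary under test
		profile := "stopped-upgrade-178757"

		run(oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio")
		run(oldBin, "-p", profile, "stop")
		run(newBin, "start", "-p", profile, "--memory=2200", "--driver=kvm2", "--container-runtime=crio")
	}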

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-178757
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                    

Test skip (33/221)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    